00:00:00.001 Started by upstream project "autotest-per-patch" build number 131133 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.214 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:09.129 The recommended git tool is: git 00:00:09.129 using credential 00000000-0000-0000-0000-000000000002 00:00:09.131 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:09.146 Fetching changes from the remote Git repository 00:00:09.150 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:09.164 Using shallow fetch with depth 1 00:00:09.164 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:09.164 > git --version # timeout=10 00:00:09.178 > git --version # 'git version 2.39.2' 00:00:09.179 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:09.192 Setting http proxy: proxy-dmz.intel.com:911 00:00:09.192 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:17.468 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:17.482 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:17.499 Checking out Revision bb1b9bfed281c179b06b3c39bbc702302ccac514 (FETCH_HEAD) 00:00:17.499 > git config core.sparsecheckout # timeout=10 00:00:17.513 > git read-tree -mu HEAD # timeout=10 00:00:17.533 > git checkout -f bb1b9bfed281c179b06b3c39bbc702302ccac514 # timeout=5 00:00:17.556 Commit message: "scripts/kid: add issue 3551" 00:00:17.556 > git rev-list --no-walk bb1b9bfed281c179b06b3c39bbc702302ccac514 # timeout=10 00:00:17.668 [Pipeline] Start of Pipeline 00:00:17.683 [Pipeline] library 00:00:17.685 Loading library shm_lib@master 00:00:17.685 Library shm_lib@master is cached. Copying from home. 00:00:17.706 [Pipeline] node 00:00:17.715 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:17.717 [Pipeline] { 00:00:17.727 [Pipeline] catchError 00:00:17.728 [Pipeline] { 00:00:17.738 [Pipeline] wrap 00:00:17.744 [Pipeline] { 00:00:17.751 [Pipeline] stage 00:00:17.753 [Pipeline] { (Prologue) 00:00:17.960 [Pipeline] sh 00:00:18.244 + logger -p user.info -t JENKINS-CI 00:00:18.257 [Pipeline] echo 00:00:18.259 Node: WFP6 00:00:18.267 [Pipeline] sh 00:00:18.570 [Pipeline] setCustomBuildProperty 00:00:18.582 [Pipeline] echo 00:00:18.584 Cleanup processes 00:00:18.590 [Pipeline] sh 00:00:18.876 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:18.876 835779 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:18.888 [Pipeline] sh 00:00:19.171 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:19.171 ++ grep -v 'sudo pgrep' 00:00:19.171 ++ awk '{print $1}' 00:00:19.171 + sudo kill -9 00:00:19.171 + true 00:00:19.183 [Pipeline] cleanWs 00:00:19.192 [WS-CLEANUP] Deleting project workspace... 00:00:19.192 [WS-CLEANUP] Deferred wipeout is used... 
00:00:19.198 [WS-CLEANUP] done 00:00:19.203 [Pipeline] setCustomBuildProperty 00:00:19.219 [Pipeline] sh 00:00:19.500 + sudo git config --global --replace-all safe.directory '*' 00:00:19.587 [Pipeline] httpRequest 00:00:20.229 [Pipeline] echo 00:00:20.231 Sorcerer 10.211.164.101 is alive 00:00:20.241 [Pipeline] retry 00:00:20.243 [Pipeline] { 00:00:20.257 [Pipeline] httpRequest 00:00:20.261 HttpMethod: GET 00:00:20.261 URL: http://10.211.164.101/packages/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz 00:00:20.262 Sending request to url: http://10.211.164.101/packages/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz 00:00:20.270 Response Code: HTTP/1.1 200 OK 00:00:20.271 Success: Status code 200 is in the accepted range: 200,404 00:00:20.271 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz 00:00:35.329 [Pipeline] } 00:00:35.348 [Pipeline] // retry 00:00:35.356 [Pipeline] sh 00:00:35.642 + tar --no-same-owner -xf jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz 00:00:35.657 [Pipeline] httpRequest 00:00:36.350 [Pipeline] echo 00:00:36.351 Sorcerer 10.211.164.101 is alive 00:00:36.359 [Pipeline] retry 00:00:36.361 [Pipeline] { 00:00:36.373 [Pipeline] httpRequest 00:00:36.377 HttpMethod: GET 00:00:36.377 URL: http://10.211.164.101/packages/spdk_2a72c30695e4695b56236f93da1c2d4993bbb959.tar.gz 00:00:36.378 Sending request to url: http://10.211.164.101/packages/spdk_2a72c30695e4695b56236f93da1c2d4993bbb959.tar.gz 00:00:36.385 Response Code: HTTP/1.1 200 OK 00:00:36.385 Success: Status code 200 is in the accepted range: 200,404 00:00:36.385 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_2a72c30695e4695b56236f93da1c2d4993bbb959.tar.gz 00:05:18.449 [Pipeline] } 00:05:18.469 [Pipeline] // retry 00:05:18.478 [Pipeline] sh 00:05:18.766 + tar --no-same-owner -xf spdk_2a72c30695e4695b56236f93da1c2d4993bbb959.tar.gz 00:05:21.328 [Pipeline] sh 00:05:21.662 + git -C spdk log --oneline -n5 00:05:21.662 2a72c3069 nvme/poll_group: create and manage fd_group for nvme poll group 00:05:21.662 699078603 thread: Extended options for spdk_interrupt_register 00:05:21.662 7868e657c util: fix total fds to wait for 00:05:21.662 6f7c1eab6 util: handle events for vfio fd type 00:05:21.662 4c93d5931 util: Extended options for spdk_fd_group_add 00:05:21.683 [Pipeline] } 00:05:21.694 [Pipeline] // stage 00:05:21.703 [Pipeline] stage 00:05:21.705 [Pipeline] { (Prepare) 00:05:21.721 [Pipeline] writeFile 00:05:21.737 [Pipeline] sh 00:05:22.024 + logger -p user.info -t JENKINS-CI 00:05:22.039 [Pipeline] sh 00:05:22.325 + logger -p user.info -t JENKINS-CI 00:05:22.337 [Pipeline] sh 00:05:22.622 + cat autorun-spdk.conf 00:05:22.622 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:22.622 SPDK_TEST_NVMF=1 00:05:22.622 SPDK_TEST_NVME_CLI=1 00:05:22.622 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:22.622 SPDK_TEST_NVMF_NICS=e810 00:05:22.622 SPDK_TEST_VFIOUSER=1 00:05:22.622 SPDK_RUN_UBSAN=1 00:05:22.622 NET_TYPE=phy 00:05:22.630 RUN_NIGHTLY=0 00:05:22.634 [Pipeline] readFile 00:05:22.658 [Pipeline] withEnv 00:05:22.660 [Pipeline] { 00:05:22.672 [Pipeline] sh 00:05:22.959 + set -ex 00:05:22.959 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:05:22.959 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:22.959 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:22.959 ++ SPDK_TEST_NVMF=1 00:05:22.959 ++ SPDK_TEST_NVME_CLI=1 00:05:22.959 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:22.959 ++ SPDK_TEST_NVMF_NICS=e810 
00:05:22.959 ++ SPDK_TEST_VFIOUSER=1 00:05:22.959 ++ SPDK_RUN_UBSAN=1 00:05:22.959 ++ NET_TYPE=phy 00:05:22.959 ++ RUN_NIGHTLY=0 00:05:22.959 + case $SPDK_TEST_NVMF_NICS in 00:05:22.959 + DRIVERS=ice 00:05:22.959 + [[ tcp == \r\d\m\a ]] 00:05:22.959 + [[ -n ice ]] 00:05:22.959 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:05:22.959 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:05:22.959 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:05:22.959 rmmod: ERROR: Module irdma is not currently loaded 00:05:22.959 rmmod: ERROR: Module i40iw is not currently loaded 00:05:22.959 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:05:22.959 + true 00:05:22.959 + for D in $DRIVERS 00:05:22.959 + sudo modprobe ice 00:05:22.959 + exit 0 00:05:22.969 [Pipeline] } 00:05:22.984 [Pipeline] // withEnv 00:05:22.990 [Pipeline] } 00:05:23.006 [Pipeline] // stage 00:05:23.015 [Pipeline] catchError 00:05:23.017 [Pipeline] { 00:05:23.029 [Pipeline] timeout 00:05:23.029 Timeout set to expire in 1 hr 0 min 00:05:23.031 [Pipeline] { 00:05:23.043 [Pipeline] stage 00:05:23.045 [Pipeline] { (Tests) 00:05:23.060 [Pipeline] sh 00:05:23.350 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:23.350 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:23.350 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:23.350 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:05:23.350 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:23.350 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:05:23.350 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:05:23.350 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:05:23.350 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:05:23.350 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:05:23.350 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:05:23.350 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:23.350 + source /etc/os-release 00:05:23.350 ++ NAME='Fedora Linux' 00:05:23.350 ++ VERSION='39 (Cloud Edition)' 00:05:23.350 ++ ID=fedora 00:05:23.350 ++ VERSION_ID=39 00:05:23.350 ++ VERSION_CODENAME= 00:05:23.350 ++ PLATFORM_ID=platform:f39 00:05:23.350 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:05:23.350 ++ ANSI_COLOR='0;38;2;60;110;180' 00:05:23.350 ++ LOGO=fedora-logo-icon 00:05:23.350 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:05:23.350 ++ HOME_URL=https://fedoraproject.org/ 00:05:23.350 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:05:23.350 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:05:23.350 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:05:23.350 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:05:23.350 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:05:23.350 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:05:23.350 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:05:23.351 ++ SUPPORT_END=2024-11-12 00:05:23.351 ++ VARIANT='Cloud Edition' 00:05:23.351 ++ VARIANT_ID=cloud 00:05:23.351 + uname -a 00:05:23.351 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:05:23.351 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:25.891 Hugepages 00:05:25.891 node hugesize free / total 00:05:25.891 node0 1048576kB 0 / 0 00:05:25.891 node0 2048kB 0 / 0 00:05:25.891 node1 1048576kB 0 / 0 00:05:25.891 node1 2048kB 0 / 0 00:05:25.891 00:05:25.891 Type BDF 
Vendor Device NUMA Driver Device Block devices 00:05:25.891 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:25.891 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:25.891 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:25.891 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:25.891 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:25.891 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:25.891 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:25.891 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:25.891 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:25.891 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:25.891 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:25.891 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:25.891 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:25.891 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:25.891 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:25.891 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:25.891 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:25.891 + rm -f /tmp/spdk-ld-path 00:05:25.891 + source autorun-spdk.conf 00:05:25.891 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:25.891 ++ SPDK_TEST_NVMF=1 00:05:25.891 ++ SPDK_TEST_NVME_CLI=1 00:05:25.891 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:25.891 ++ SPDK_TEST_NVMF_NICS=e810 00:05:25.891 ++ SPDK_TEST_VFIOUSER=1 00:05:25.891 ++ SPDK_RUN_UBSAN=1 00:05:25.891 ++ NET_TYPE=phy 00:05:25.891 ++ RUN_NIGHTLY=0 00:05:25.891 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:05:25.891 + [[ -n '' ]] 00:05:25.891 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:25.891 + for M in /var/spdk/build-*-manifest.txt 00:05:25.891 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:05:25.891 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:05:25.891 + for M in /var/spdk/build-*-manifest.txt 00:05:25.891 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:05:25.891 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:05:25.891 + for M in /var/spdk/build-*-manifest.txt 00:05:25.891 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:05:25.891 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:05:25.891 ++ uname 00:05:25.891 + [[ Linux == \L\i\n\u\x ]] 00:05:25.891 + sudo dmesg -T 00:05:26.152 + sudo dmesg --clear 00:05:26.152 + dmesg_pid=837751 00:05:26.152 + [[ Fedora Linux == FreeBSD ]] 00:05:26.152 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:26.152 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:26.152 + sudo dmesg -Tw 00:05:26.152 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:05:26.152 + [[ -x /usr/src/fio-static/fio ]] 00:05:26.152 + export FIO_BIN=/usr/src/fio-static/fio 00:05:26.152 + FIO_BIN=/usr/src/fio-static/fio 00:05:26.152 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:05:26.152 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:05:26.152 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:05:26.152 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:26.152 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:26.152 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:05:26.152 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:26.152 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:26.152 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:26.152 Test configuration: 00:05:26.152 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:26.152 SPDK_TEST_NVMF=1 00:05:26.152 SPDK_TEST_NVME_CLI=1 00:05:26.152 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:26.152 SPDK_TEST_NVMF_NICS=e810 00:05:26.152 SPDK_TEST_VFIOUSER=1 00:05:26.152 SPDK_RUN_UBSAN=1 00:05:26.152 NET_TYPE=phy 00:05:26.152 RUN_NIGHTLY=0 17:22:25 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:05:26.152 17:22:25 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:26.152 17:22:25 -- scripts/common.sh@15 -- $ shopt -s extglob 00:05:26.152 17:22:25 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:05:26.152 17:22:25 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.152 17:22:25 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.152 17:22:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.152 17:22:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.152 17:22:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.152 17:22:25 -- paths/export.sh@5 -- $ export PATH 00:05:26.152 17:22:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.152 17:22:25 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:26.152 17:22:25 -- common/autobuild_common.sh@486 -- $ date +%s 00:05:26.152 17:22:25 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728919345.XXXXXX 00:05:26.152 17:22:25 -- common/autobuild_common.sh@486 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1728919345.a8d7CO 00:05:26.152 17:22:25 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:05:26.152 17:22:25 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:05:26.152 17:22:25 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:05:26.152 17:22:25 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:05:26.152 17:22:25 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:05:26.152 17:22:25 -- common/autobuild_common.sh@502 -- $ get_config_params 00:05:26.152 17:22:25 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:05:26.152 17:22:25 -- common/autotest_common.sh@10 -- $ set +x 00:05:26.152 17:22:25 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:05:26.152 17:22:25 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:05:26.152 17:22:25 -- pm/common@17 -- $ local monitor 00:05:26.152 17:22:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:26.152 17:22:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:26.152 17:22:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:26.152 17:22:25 -- pm/common@21 -- $ date +%s 00:05:26.152 17:22:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:26.152 17:22:25 -- pm/common@21 -- $ date +%s 00:05:26.152 17:22:25 -- pm/common@25 -- $ sleep 1 00:05:26.152 17:22:25 -- pm/common@21 -- $ date +%s 00:05:26.152 17:22:25 -- pm/common@21 -- $ date +%s 00:05:26.152 17:22:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728919345 00:05:26.152 17:22:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728919345 00:05:26.152 17:22:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728919345 00:05:26.153 17:22:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728919345 00:05:26.153 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728919345_collect-vmstat.pm.log 00:05:26.153 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728919345_collect-cpu-load.pm.log 00:05:26.153 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728919345_collect-cpu-temp.pm.log 00:05:26.153 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728919345_collect-bmc-pm.bmc.pm.log 00:05:27.091 17:22:26 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:05:27.091 17:22:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:05:27.091 17:22:26 -- spdk/autobuild.sh@12 -- $ umask 022 00:05:27.091 17:22:26 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:27.091 17:22:26 -- spdk/autobuild.sh@16 -- $ date -u 00:05:27.091 Mon Oct 14 03:22:26 PM UTC 2024 00:05:27.091 17:22:26 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:05:27.091 v25.01-pre-76-g2a72c3069 00:05:27.091 17:22:26 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:05:27.091 17:22:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:05:27.091 17:22:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:05:27.091 17:22:26 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:05:27.091 17:22:26 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:05:27.091 17:22:26 -- common/autotest_common.sh@10 -- $ set +x 00:05:27.350 ************************************ 00:05:27.350 START TEST ubsan 00:05:27.350 ************************************ 00:05:27.350 17:22:26 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:05:27.350 using ubsan 00:05:27.350 00:05:27.350 real 0m0.000s 00:05:27.350 user 0m0.000s 00:05:27.350 sys 0m0.000s 00:05:27.350 17:22:26 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:27.350 17:22:26 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:05:27.350 ************************************ 00:05:27.350 END TEST ubsan 00:05:27.350 ************************************ 00:05:27.350 17:22:26 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:05:27.350 17:22:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:27.350 17:22:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:05:27.350 17:22:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:27.350 17:22:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:05:27.350 17:22:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:05:27.350 17:22:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:05:27.350 17:22:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:05:27.351 17:22:26 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:05:27.351 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:27.351 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:27.919 Using 'verbs' RDMA provider 00:05:40.708 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:05:52.927 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:05:52.927 Creating mk/config.mk...done. 00:05:52.927 Creating mk/cc.flags.mk...done. 00:05:52.927 Type 'make' to build. 
00:05:52.927 17:22:51 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:05:52.927 17:22:51 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:05:52.927 17:22:51 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:05:52.927 17:22:51 -- common/autotest_common.sh@10 -- $ set +x 00:05:52.927 ************************************ 00:05:52.927 START TEST make 00:05:52.927 ************************************ 00:05:52.927 17:22:51 make -- common/autotest_common.sh@1125 -- $ make -j96 00:05:53.186 make[1]: Nothing to be done for 'all'. 00:05:54.570 The Meson build system 00:05:54.570 Version: 1.5.0 00:05:54.570 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:05:54.570 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:54.570 Build type: native build 00:05:54.570 Project name: libvfio-user 00:05:54.570 Project version: 0.0.1 00:05:54.570 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:54.570 C linker for the host machine: cc ld.bfd 2.40-14 00:05:54.570 Host machine cpu family: x86_64 00:05:54.570 Host machine cpu: x86_64 00:05:54.570 Run-time dependency threads found: YES 00:05:54.570 Library dl found: YES 00:05:54.570 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:54.570 Run-time dependency json-c found: YES 0.17 00:05:54.570 Run-time dependency cmocka found: YES 1.1.7 00:05:54.570 Program pytest-3 found: NO 00:05:54.570 Program flake8 found: NO 00:05:54.570 Program misspell-fixer found: NO 00:05:54.570 Program restructuredtext-lint found: NO 00:05:54.570 Program valgrind found: YES (/usr/bin/valgrind) 00:05:54.570 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:54.570 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:54.570 Compiler for C supports arguments -Wwrite-strings: YES 00:05:54.570 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:05:54.570 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:05:54.570 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:05:54.570 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:05:54.570 Build targets in project: 8 00:05:54.570 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:05:54.570 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:05:54.570 00:05:54.570 libvfio-user 0.0.1 00:05:54.570 00:05:54.570 User defined options 00:05:54.570 buildtype : debug 00:05:54.570 default_library: shared 00:05:54.570 libdir : /usr/local/lib 00:05:54.570 00:05:54.570 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:55.137 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:05:55.137 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:05:55.137 [2/37] Compiling C object samples/null.p/null.c.o 00:05:55.137 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:05:55.137 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:05:55.137 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:05:55.137 [6/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:05:55.137 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:05:55.137 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:05:55.137 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:05:55.137 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:05:55.137 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:05:55.137 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:05:55.137 [13/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:05:55.137 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:05:55.137 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:05:55.137 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:05:55.137 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:05:55.137 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:05:55.397 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:05:55.397 [20/37] Compiling C object samples/server.p/server.c.o 00:05:55.397 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:05:55.397 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:05:55.397 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:05:55.397 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:05:55.397 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:05:55.397 [26/37] Compiling C object samples/client.p/client.c.o 00:05:55.397 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:05:55.397 [28/37] Linking target samples/client 00:05:55.397 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:05:55.397 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:05:55.397 [31/37] Linking target test/unit_tests 00:05:55.655 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:05:55.655 [33/37] Linking target samples/gpio-pci-idio-16 00:05:55.655 [34/37] Linking target samples/shadow_ioeventfd_server 00:05:55.655 [35/37] Linking target samples/server 00:05:55.655 [36/37] Linking target samples/lspci 00:05:55.655 [37/37] Linking target samples/null 00:05:55.655 INFO: autodetecting backend as ninja 00:05:55.655 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:05:55.655 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:55.913 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:05:55.913 ninja: no work to do. 00:06:01.190 The Meson build system 00:06:01.190 Version: 1.5.0 00:06:01.190 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:06:01.190 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:06:01.191 Build type: native build 00:06:01.191 Program cat found: YES (/usr/bin/cat) 00:06:01.191 Project name: DPDK 00:06:01.191 Project version: 24.03.0 00:06:01.191 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:01.191 C linker for the host machine: cc ld.bfd 2.40-14 00:06:01.191 Host machine cpu family: x86_64 00:06:01.191 Host machine cpu: x86_64 00:06:01.191 Message: ## Building in Developer Mode ## 00:06:01.191 Program pkg-config found: YES (/usr/bin/pkg-config) 00:06:01.191 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:06:01.191 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:06:01.191 Program python3 found: YES (/usr/bin/python3) 00:06:01.191 Program cat found: YES (/usr/bin/cat) 00:06:01.191 Compiler for C supports arguments -march=native: YES 00:06:01.191 Checking for size of "void *" : 8 00:06:01.191 Checking for size of "void *" : 8 (cached) 00:06:01.191 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:06:01.191 Library m found: YES 00:06:01.191 Library numa found: YES 00:06:01.191 Has header "numaif.h" : YES 00:06:01.191 Library fdt found: NO 00:06:01.191 Library execinfo found: NO 00:06:01.191 Has header "execinfo.h" : YES 00:06:01.191 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:01.191 Run-time dependency libarchive found: NO (tried pkgconfig) 00:06:01.191 Run-time dependency libbsd found: NO (tried pkgconfig) 00:06:01.191 Run-time dependency jansson found: NO (tried pkgconfig) 00:06:01.191 Run-time dependency openssl found: YES 3.1.1 00:06:01.191 Run-time dependency libpcap found: YES 1.10.4 00:06:01.191 Has header "pcap.h" with dependency libpcap: YES 00:06:01.191 Compiler for C supports arguments -Wcast-qual: YES 00:06:01.191 Compiler for C supports arguments -Wdeprecated: YES 00:06:01.191 Compiler for C supports arguments -Wformat: YES 00:06:01.191 Compiler for C supports arguments -Wformat-nonliteral: NO 00:06:01.191 Compiler for C supports arguments -Wformat-security: NO 00:06:01.191 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:01.191 Compiler for C supports arguments -Wmissing-prototypes: YES 00:06:01.191 Compiler for C supports arguments -Wnested-externs: YES 00:06:01.191 Compiler for C supports arguments -Wold-style-definition: YES 00:06:01.191 Compiler for C supports arguments -Wpointer-arith: YES 00:06:01.191 Compiler for C supports arguments -Wsign-compare: YES 00:06:01.191 Compiler for C supports arguments -Wstrict-prototypes: YES 00:06:01.191 Compiler for C supports arguments -Wundef: YES 00:06:01.191 Compiler for C supports arguments -Wwrite-strings: YES 00:06:01.191 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:06:01.191 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:06:01.191 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:01.191 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:06:01.191 Program objdump found: YES (/usr/bin/objdump) 00:06:01.191 Compiler for C supports arguments -mavx512f: YES 00:06:01.191 Checking if "AVX512 checking" compiles: YES 00:06:01.191 Fetching value of define "__SSE4_2__" : 1 00:06:01.191 Fetching value of define "__AES__" : 1 00:06:01.191 Fetching value of define "__AVX__" : 1 00:06:01.191 Fetching value of define "__AVX2__" : 1 00:06:01.191 Fetching value of define "__AVX512BW__" : 1 00:06:01.191 Fetching value of define "__AVX512CD__" : 1 00:06:01.191 Fetching value of define "__AVX512DQ__" : 1 00:06:01.191 Fetching value of define "__AVX512F__" : 1 00:06:01.191 Fetching value of define "__AVX512VL__" : 1 00:06:01.191 Fetching value of define "__PCLMUL__" : 1 00:06:01.191 Fetching value of define "__RDRND__" : 1 00:06:01.191 Fetching value of define "__RDSEED__" : 1 00:06:01.191 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:06:01.191 Fetching value of define "__znver1__" : (undefined) 00:06:01.191 Fetching value of define "__znver2__" : (undefined) 00:06:01.191 Fetching value of define "__znver3__" : (undefined) 00:06:01.191 Fetching value of define "__znver4__" : (undefined) 00:06:01.191 Compiler for C supports arguments -Wno-format-truncation: YES 00:06:01.191 Message: lib/log: Defining dependency "log" 00:06:01.191 Message: lib/kvargs: Defining dependency "kvargs" 00:06:01.191 Message: lib/telemetry: Defining dependency "telemetry" 00:06:01.191 Checking for function "getentropy" : NO 00:06:01.191 Message: lib/eal: Defining dependency "eal" 00:06:01.191 Message: lib/ring: Defining dependency "ring" 00:06:01.191 Message: lib/rcu: Defining dependency "rcu" 00:06:01.191 Message: lib/mempool: Defining dependency "mempool" 00:06:01.191 Message: lib/mbuf: Defining dependency "mbuf" 00:06:01.191 Fetching value of define "__PCLMUL__" : 1 (cached) 00:06:01.191 Fetching value of define "__AVX512F__" : 1 (cached) 00:06:01.191 Fetching value of define "__AVX512BW__" : 1 (cached) 00:06:01.191 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:06:01.191 Fetching value of define "__AVX512VL__" : 1 (cached) 00:06:01.191 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:06:01.191 Compiler for C supports arguments -mpclmul: YES 00:06:01.191 Compiler for C supports arguments -maes: YES 00:06:01.191 Compiler for C supports arguments -mavx512f: YES (cached) 00:06:01.191 Compiler for C supports arguments -mavx512bw: YES 00:06:01.191 Compiler for C supports arguments -mavx512dq: YES 00:06:01.191 Compiler for C supports arguments -mavx512vl: YES 00:06:01.191 Compiler for C supports arguments -mvpclmulqdq: YES 00:06:01.191 Compiler for C supports arguments -mavx2: YES 00:06:01.191 Compiler for C supports arguments -mavx: YES 00:06:01.191 Message: lib/net: Defining dependency "net" 00:06:01.191 Message: lib/meter: Defining dependency "meter" 00:06:01.191 Message: lib/ethdev: Defining dependency "ethdev" 00:06:01.191 Message: lib/pci: Defining dependency "pci" 00:06:01.191 Message: lib/cmdline: Defining dependency "cmdline" 00:06:01.191 Message: lib/hash: Defining dependency "hash" 00:06:01.191 Message: lib/timer: Defining dependency "timer" 00:06:01.191 Message: lib/compressdev: Defining dependency "compressdev" 00:06:01.191 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:01.191 Message: lib/dmadev: Defining dependency 
"dmadev" 00:06:01.191 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:01.191 Message: lib/power: Defining dependency "power" 00:06:01.191 Message: lib/reorder: Defining dependency "reorder" 00:06:01.191 Message: lib/security: Defining dependency "security" 00:06:01.191 Has header "linux/userfaultfd.h" : YES 00:06:01.191 Has header "linux/vduse.h" : YES 00:06:01.191 Message: lib/vhost: Defining dependency "vhost" 00:06:01.191 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:01.191 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:01.191 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:01.191 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:01.191 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:01.191 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:01.191 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:01.191 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:01.191 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:01.191 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:06:01.191 Program doxygen found: YES (/usr/local/bin/doxygen) 00:06:01.191 Configuring doxy-api-html.conf using configuration 00:06:01.191 Configuring doxy-api-man.conf using configuration 00:06:01.191 Program mandb found: YES (/usr/bin/mandb) 00:06:01.191 Program sphinx-build found: NO 00:06:01.191 Configuring rte_build_config.h using configuration 00:06:01.191 Message: 00:06:01.191 ================= 00:06:01.191 Applications Enabled 00:06:01.191 ================= 00:06:01.191 00:06:01.191 apps: 00:06:01.191 00:06:01.191 00:06:01.191 Message: 00:06:01.191 ================= 00:06:01.191 Libraries Enabled 00:06:01.191 ================= 00:06:01.191 00:06:01.191 libs: 00:06:01.191 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:01.191 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:01.191 cryptodev, dmadev, power, reorder, security, vhost, 00:06:01.191 00:06:01.191 Message: 00:06:01.191 =============== 00:06:01.191 Drivers Enabled 00:06:01.191 =============== 00:06:01.191 00:06:01.191 common: 00:06:01.191 00:06:01.191 bus: 00:06:01.191 pci, vdev, 00:06:01.191 mempool: 00:06:01.191 ring, 00:06:01.191 dma: 00:06:01.191 00:06:01.191 net: 00:06:01.191 00:06:01.191 crypto: 00:06:01.191 00:06:01.191 compress: 00:06:01.191 00:06:01.191 vdpa: 00:06:01.191 00:06:01.191 00:06:01.191 Message: 00:06:01.191 ================= 00:06:01.191 Content Skipped 00:06:01.192 ================= 00:06:01.192 00:06:01.192 apps: 00:06:01.192 dumpcap: explicitly disabled via build config 00:06:01.192 graph: explicitly disabled via build config 00:06:01.192 pdump: explicitly disabled via build config 00:06:01.192 proc-info: explicitly disabled via build config 00:06:01.192 test-acl: explicitly disabled via build config 00:06:01.192 test-bbdev: explicitly disabled via build config 00:06:01.192 test-cmdline: explicitly disabled via build config 00:06:01.192 test-compress-perf: explicitly disabled via build config 00:06:01.192 test-crypto-perf: explicitly disabled via build config 00:06:01.192 test-dma-perf: explicitly disabled via build config 00:06:01.192 test-eventdev: explicitly disabled via build config 00:06:01.192 test-fib: explicitly disabled via build config 00:06:01.192 test-flow-perf: explicitly disabled via build config 00:06:01.192 test-gpudev: explicitly 
disabled via build config 00:06:01.192 test-mldev: explicitly disabled via build config 00:06:01.192 test-pipeline: explicitly disabled via build config 00:06:01.192 test-pmd: explicitly disabled via build config 00:06:01.192 test-regex: explicitly disabled via build config 00:06:01.192 test-sad: explicitly disabled via build config 00:06:01.192 test-security-perf: explicitly disabled via build config 00:06:01.192 00:06:01.192 libs: 00:06:01.192 argparse: explicitly disabled via build config 00:06:01.192 metrics: explicitly disabled via build config 00:06:01.192 acl: explicitly disabled via build config 00:06:01.192 bbdev: explicitly disabled via build config 00:06:01.192 bitratestats: explicitly disabled via build config 00:06:01.192 bpf: explicitly disabled via build config 00:06:01.192 cfgfile: explicitly disabled via build config 00:06:01.192 distributor: explicitly disabled via build config 00:06:01.192 efd: explicitly disabled via build config 00:06:01.192 eventdev: explicitly disabled via build config 00:06:01.192 dispatcher: explicitly disabled via build config 00:06:01.192 gpudev: explicitly disabled via build config 00:06:01.192 gro: explicitly disabled via build config 00:06:01.192 gso: explicitly disabled via build config 00:06:01.192 ip_frag: explicitly disabled via build config 00:06:01.192 jobstats: explicitly disabled via build config 00:06:01.192 latencystats: explicitly disabled via build config 00:06:01.192 lpm: explicitly disabled via build config 00:06:01.192 member: explicitly disabled via build config 00:06:01.192 pcapng: explicitly disabled via build config 00:06:01.192 rawdev: explicitly disabled via build config 00:06:01.192 regexdev: explicitly disabled via build config 00:06:01.192 mldev: explicitly disabled via build config 00:06:01.192 rib: explicitly disabled via build config 00:06:01.192 sched: explicitly disabled via build config 00:06:01.192 stack: explicitly disabled via build config 00:06:01.192 ipsec: explicitly disabled via build config 00:06:01.192 pdcp: explicitly disabled via build config 00:06:01.192 fib: explicitly disabled via build config 00:06:01.192 port: explicitly disabled via build config 00:06:01.192 pdump: explicitly disabled via build config 00:06:01.192 table: explicitly disabled via build config 00:06:01.192 pipeline: explicitly disabled via build config 00:06:01.192 graph: explicitly disabled via build config 00:06:01.192 node: explicitly disabled via build config 00:06:01.192 00:06:01.192 drivers: 00:06:01.192 common/cpt: not in enabled drivers build config 00:06:01.192 common/dpaax: not in enabled drivers build config 00:06:01.192 common/iavf: not in enabled drivers build config 00:06:01.192 common/idpf: not in enabled drivers build config 00:06:01.192 common/ionic: not in enabled drivers build config 00:06:01.192 common/mvep: not in enabled drivers build config 00:06:01.192 common/octeontx: not in enabled drivers build config 00:06:01.192 bus/auxiliary: not in enabled drivers build config 00:06:01.192 bus/cdx: not in enabled drivers build config 00:06:01.192 bus/dpaa: not in enabled drivers build config 00:06:01.192 bus/fslmc: not in enabled drivers build config 00:06:01.192 bus/ifpga: not in enabled drivers build config 00:06:01.192 bus/platform: not in enabled drivers build config 00:06:01.192 bus/uacce: not in enabled drivers build config 00:06:01.192 bus/vmbus: not in enabled drivers build config 00:06:01.192 common/cnxk: not in enabled drivers build config 00:06:01.192 common/mlx5: not in enabled drivers build config 
00:06:01.192 common/nfp: not in enabled drivers build config 00:06:01.192 common/nitrox: not in enabled drivers build config 00:06:01.192 common/qat: not in enabled drivers build config 00:06:01.192 common/sfc_efx: not in enabled drivers build config 00:06:01.192 mempool/bucket: not in enabled drivers build config 00:06:01.192 mempool/cnxk: not in enabled drivers build config 00:06:01.192 mempool/dpaa: not in enabled drivers build config 00:06:01.192 mempool/dpaa2: not in enabled drivers build config 00:06:01.192 mempool/octeontx: not in enabled drivers build config 00:06:01.192 mempool/stack: not in enabled drivers build config 00:06:01.192 dma/cnxk: not in enabled drivers build config 00:06:01.192 dma/dpaa: not in enabled drivers build config 00:06:01.192 dma/dpaa2: not in enabled drivers build config 00:06:01.192 dma/hisilicon: not in enabled drivers build config 00:06:01.192 dma/idxd: not in enabled drivers build config 00:06:01.192 dma/ioat: not in enabled drivers build config 00:06:01.192 dma/skeleton: not in enabled drivers build config 00:06:01.192 net/af_packet: not in enabled drivers build config 00:06:01.192 net/af_xdp: not in enabled drivers build config 00:06:01.192 net/ark: not in enabled drivers build config 00:06:01.192 net/atlantic: not in enabled drivers build config 00:06:01.192 net/avp: not in enabled drivers build config 00:06:01.192 net/axgbe: not in enabled drivers build config 00:06:01.192 net/bnx2x: not in enabled drivers build config 00:06:01.192 net/bnxt: not in enabled drivers build config 00:06:01.192 net/bonding: not in enabled drivers build config 00:06:01.192 net/cnxk: not in enabled drivers build config 00:06:01.192 net/cpfl: not in enabled drivers build config 00:06:01.192 net/cxgbe: not in enabled drivers build config 00:06:01.192 net/dpaa: not in enabled drivers build config 00:06:01.192 net/dpaa2: not in enabled drivers build config 00:06:01.192 net/e1000: not in enabled drivers build config 00:06:01.192 net/ena: not in enabled drivers build config 00:06:01.192 net/enetc: not in enabled drivers build config 00:06:01.192 net/enetfec: not in enabled drivers build config 00:06:01.192 net/enic: not in enabled drivers build config 00:06:01.192 net/failsafe: not in enabled drivers build config 00:06:01.192 net/fm10k: not in enabled drivers build config 00:06:01.192 net/gve: not in enabled drivers build config 00:06:01.192 net/hinic: not in enabled drivers build config 00:06:01.192 net/hns3: not in enabled drivers build config 00:06:01.192 net/i40e: not in enabled drivers build config 00:06:01.192 net/iavf: not in enabled drivers build config 00:06:01.192 net/ice: not in enabled drivers build config 00:06:01.192 net/idpf: not in enabled drivers build config 00:06:01.192 net/igc: not in enabled drivers build config 00:06:01.192 net/ionic: not in enabled drivers build config 00:06:01.192 net/ipn3ke: not in enabled drivers build config 00:06:01.192 net/ixgbe: not in enabled drivers build config 00:06:01.192 net/mana: not in enabled drivers build config 00:06:01.192 net/memif: not in enabled drivers build config 00:06:01.192 net/mlx4: not in enabled drivers build config 00:06:01.192 net/mlx5: not in enabled drivers build config 00:06:01.192 net/mvneta: not in enabled drivers build config 00:06:01.192 net/mvpp2: not in enabled drivers build config 00:06:01.192 net/netvsc: not in enabled drivers build config 00:06:01.192 net/nfb: not in enabled drivers build config 00:06:01.192 net/nfp: not in enabled drivers build config 00:06:01.192 net/ngbe: not in enabled 
drivers build config 00:06:01.192 net/null: not in enabled drivers build config 00:06:01.192 net/octeontx: not in enabled drivers build config 00:06:01.192 net/octeon_ep: not in enabled drivers build config 00:06:01.192 net/pcap: not in enabled drivers build config 00:06:01.192 net/pfe: not in enabled drivers build config 00:06:01.192 net/qede: not in enabled drivers build config 00:06:01.192 net/ring: not in enabled drivers build config 00:06:01.192 net/sfc: not in enabled drivers build config 00:06:01.192 net/softnic: not in enabled drivers build config 00:06:01.192 net/tap: not in enabled drivers build config 00:06:01.192 net/thunderx: not in enabled drivers build config 00:06:01.192 net/txgbe: not in enabled drivers build config 00:06:01.192 net/vdev_netvsc: not in enabled drivers build config 00:06:01.192 net/vhost: not in enabled drivers build config 00:06:01.192 net/virtio: not in enabled drivers build config 00:06:01.192 net/vmxnet3: not in enabled drivers build config 00:06:01.192 raw/*: missing internal dependency, "rawdev" 00:06:01.192 crypto/armv8: not in enabled drivers build config 00:06:01.192 crypto/bcmfs: not in enabled drivers build config 00:06:01.192 crypto/caam_jr: not in enabled drivers build config 00:06:01.192 crypto/ccp: not in enabled drivers build config 00:06:01.192 crypto/cnxk: not in enabled drivers build config 00:06:01.193 crypto/dpaa_sec: not in enabled drivers build config 00:06:01.193 crypto/dpaa2_sec: not in enabled drivers build config 00:06:01.193 crypto/ipsec_mb: not in enabled drivers build config 00:06:01.193 crypto/mlx5: not in enabled drivers build config 00:06:01.193 crypto/mvsam: not in enabled drivers build config 00:06:01.193 crypto/nitrox: not in enabled drivers build config 00:06:01.193 crypto/null: not in enabled drivers build config 00:06:01.193 crypto/octeontx: not in enabled drivers build config 00:06:01.193 crypto/openssl: not in enabled drivers build config 00:06:01.193 crypto/scheduler: not in enabled drivers build config 00:06:01.193 crypto/uadk: not in enabled drivers build config 00:06:01.193 crypto/virtio: not in enabled drivers build config 00:06:01.193 compress/isal: not in enabled drivers build config 00:06:01.193 compress/mlx5: not in enabled drivers build config 00:06:01.193 compress/nitrox: not in enabled drivers build config 00:06:01.193 compress/octeontx: not in enabled drivers build config 00:06:01.193 compress/zlib: not in enabled drivers build config 00:06:01.193 regex/*: missing internal dependency, "regexdev" 00:06:01.193 ml/*: missing internal dependency, "mldev" 00:06:01.193 vdpa/ifc: not in enabled drivers build config 00:06:01.193 vdpa/mlx5: not in enabled drivers build config 00:06:01.193 vdpa/nfp: not in enabled drivers build config 00:06:01.193 vdpa/sfc: not in enabled drivers build config 00:06:01.193 event/*: missing internal dependency, "eventdev" 00:06:01.193 baseband/*: missing internal dependency, "bbdev" 00:06:01.193 gpu/*: missing internal dependency, "gpudev" 00:06:01.193 00:06:01.193 00:06:01.193 Build targets in project: 85 00:06:01.193 00:06:01.193 DPDK 24.03.0 00:06:01.193 00:06:01.193 User defined options 00:06:01.193 buildtype : debug 00:06:01.193 default_library : shared 00:06:01.193 libdir : lib 00:06:01.193 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:01.193 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:01.193 c_link_args : 00:06:01.193 cpu_instruction_set: native 00:06:01.193 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:06:01.193 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:06:01.193 enable_docs : false 00:06:01.193 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:06:01.193 enable_kmods : false 00:06:01.193 max_lcores : 128 00:06:01.193 tests : false 00:06:01.193 00:06:01.193 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:01.771 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:06:01.771 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:01.771 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:01.771 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:01.771 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:01.771 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:01.771 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:01.771 [7/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:01.771 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:01.771 [9/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:01.771 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:01.771 [11/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:01.771 [12/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:02.037 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:02.037 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:02.037 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:02.037 [16/268] Linking static target lib/librte_kvargs.a 00:06:02.037 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:02.037 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:02.037 [19/268] Linking static target lib/librte_log.a 00:06:02.037 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:02.037 [21/268] Linking static target lib/librte_pci.a 00:06:02.037 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:02.037 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:02.037 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:02.302 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:02.302 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:02.302 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:02.302 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:02.302 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:02.302 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:02.302 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 
00:06:02.302 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:02.302 [33/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:02.302 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:02.302 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:02.302 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:02.302 [37/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:02.302 [38/268] Linking static target lib/librte_meter.a 00:06:02.302 [39/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:06:02.302 [40/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:02.302 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:02.302 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:02.302 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:02.302 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:02.302 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:02.302 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:02.302 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:02.302 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:02.302 [49/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:06:02.302 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:02.302 [51/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:02.302 [52/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:02.302 [53/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:02.302 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:02.302 [55/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:02.302 [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:02.302 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:02.302 [58/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:02.302 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:02.302 [60/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:02.302 [61/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:02.302 [62/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:02.302 [63/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:02.302 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:02.302 [65/268] Linking static target lib/librte_ring.a 00:06:02.302 [66/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:02.302 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:02.302 [68/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:02.302 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:02.302 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:02.302 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:02.302 [72/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:02.562 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:02.562 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:02.562 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:02.562 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:02.562 [77/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:02.562 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:02.562 [79/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:02.562 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:02.563 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:02.563 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:02.563 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:02.563 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:02.563 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:02.563 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:02.563 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:02.563 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:02.563 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:02.563 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:02.563 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:02.563 [92/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:02.563 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:02.563 [94/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.563 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:02.563 [96/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:02.563 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:02.563 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:02.563 [99/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:02.563 [100/268] Linking static target lib/librte_telemetry.a 00:06:02.563 [101/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:02.563 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:02.563 [103/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:02.563 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:02.563 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:02.563 [106/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:02.563 [107/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:02.563 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:02.563 [109/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:02.563 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:02.563 [111/268] Linking static target lib/librte_net.a 00:06:02.563 [112/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:02.563 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:02.563 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:02.563 [115/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:02.563 [116/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:02.563 [117/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.563 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:02.563 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:02.563 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:02.563 [121/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:02.563 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:02.563 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:02.563 [124/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:02.563 [125/268] Linking static target lib/librte_eal.a 00:06:02.563 [126/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:02.563 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:02.563 [128/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:02.563 [129/268] Linking static target lib/librte_cmdline.a 00:06:02.563 [130/268] Linking static target lib/librte_rcu.a 00:06:02.563 [131/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:02.563 [132/268] Linking static target lib/librte_mempool.a 00:06:02.563 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.563 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:02.822 [135/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.822 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:02.822 [137/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:02.822 [138/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.822 [139/268] Linking static target lib/librte_mbuf.a 00:06:02.822 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:02.822 [141/268] Linking target lib/librte_log.so.24.1 00:06:02.822 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:02.822 [143/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:02.822 [144/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:02.822 [145/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.822 [146/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:02.822 [147/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:02.822 [148/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:02.822 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:02.822 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:02.822 [151/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:02.822 [152/268] Compiling C object 
lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:02.822 [153/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:02.822 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:02.822 [155/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:02.822 [156/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.822 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:02.822 [158/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:02.822 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:02.822 [160/268] Linking target lib/librte_kvargs.so.24.1 00:06:02.822 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:02.822 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:02.822 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:02.822 [164/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:02.823 [165/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.823 [166/268] Linking static target lib/librte_timer.a 00:06:03.082 [167/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:03.082 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:03.082 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:03.082 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:03.082 [171/268] Linking target lib/librte_telemetry.so.24.1 00:06:03.082 [172/268] Linking static target lib/librte_compressdev.a 00:06:03.082 [173/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:03.082 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:03.082 [175/268] Linking static target lib/librte_dmadev.a 00:06:03.082 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:03.082 [177/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:03.082 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:03.082 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:03.082 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:03.082 [181/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:03.082 [182/268] Linking static target lib/librte_power.a 00:06:03.082 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:03.082 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:03.082 [185/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:03.082 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:03.082 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:03.082 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:03.082 [189/268] Linking static target lib/librte_reorder.a 00:06:03.082 [190/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:03.082 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:03.082 [192/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 
00:06:03.082 [193/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:03.082 [194/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:03.082 [195/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:03.082 [196/268] Linking static target drivers/librte_bus_vdev.a 00:06:03.082 [197/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:03.082 [198/268] Linking static target lib/librte_security.a 00:06:03.082 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:03.082 [200/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:03.082 [201/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:03.082 [202/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:03.082 [203/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:03.342 [204/268] Linking static target drivers/librte_mempool_ring.a 00:06:03.342 [205/268] Linking static target lib/librte_hash.a 00:06:03.342 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:03.342 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:03.342 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:03.342 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:03.342 [210/268] Linking static target drivers/librte_bus_pci.a 00:06:03.342 [211/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:03.342 [212/268] Linking static target lib/librte_cryptodev.a 00:06:03.342 [213/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.342 [214/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.342 [215/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.601 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.601 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.601 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:03.601 [219/268] Linking static target lib/librte_ethdev.a 00:06:03.601 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.601 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.602 [222/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.860 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.860 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.860 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:04.119 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.119 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:05.056 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:05.056 [229/268] Linking static target lib/librte_vhost.a 
00:06:05.315 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:07.219 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:12.491 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:12.491 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:12.491 [234/268] Linking target lib/librte_eal.so.24.1 00:06:12.749 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:12.749 [236/268] Linking target lib/librte_meter.so.24.1 00:06:12.749 [237/268] Linking target lib/librte_ring.so.24.1 00:06:12.749 [238/268] Linking target lib/librte_pci.so.24.1 00:06:12.749 [239/268] Linking target lib/librte_dmadev.so.24.1 00:06:12.749 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:06:12.749 [241/268] Linking target lib/librte_timer.so.24.1 00:06:13.007 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:13.007 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:13.007 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:13.007 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:13.007 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:13.007 [247/268] Linking target lib/librte_rcu.so.24.1 00:06:13.007 [248/268] Linking target lib/librte_mempool.so.24.1 00:06:13.007 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:06:13.007 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:13.007 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:13.007 [252/268] Linking target lib/librte_mbuf.so.24.1 00:06:13.007 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:06:13.265 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:13.265 [255/268] Linking target lib/librte_reorder.so.24.1 00:06:13.265 [256/268] Linking target lib/librte_net.so.24.1 00:06:13.265 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:06:13.265 [258/268] Linking target lib/librte_compressdev.so.24.1 00:06:13.524 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:13.524 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:13.524 [261/268] Linking target lib/librte_hash.so.24.1 00:06:13.524 [262/268] Linking target lib/librte_security.so.24.1 00:06:13.524 [263/268] Linking target lib/librte_cmdline.so.24.1 00:06:13.524 [264/268] Linking target lib/librte_ethdev.so.24.1 00:06:13.524 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:13.524 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:13.783 [267/268] Linking target lib/librte_power.so.24.1 00:06:13.783 [268/268] Linking target lib/librte_vhost.so.24.1 00:06:13.783 INFO: autodetecting backend as ninja 00:06:13.783 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:06:25.996 CC lib/log/log.o 00:06:25.996 CC lib/log/log_flags.o 00:06:25.996 CC lib/log/log_deprecated.o 00:06:25.996 CC lib/ut_mock/mock.o 00:06:25.996 CC 
lib/ut/ut.o 00:06:25.996 LIB libspdk_log.a 00:06:25.996 LIB libspdk_ut.a 00:06:25.996 LIB libspdk_ut_mock.a 00:06:25.996 SO libspdk_ut_mock.so.6.0 00:06:25.996 SO libspdk_ut.so.2.0 00:06:25.996 SO libspdk_log.so.7.1 00:06:25.996 SYMLINK libspdk_log.so 00:06:25.996 SYMLINK libspdk_ut_mock.so 00:06:25.996 SYMLINK libspdk_ut.so 00:06:25.996 CC lib/dma/dma.o 00:06:25.996 CC lib/util/base64.o 00:06:25.996 CXX lib/trace_parser/trace.o 00:06:25.996 CC lib/util/bit_array.o 00:06:25.996 CC lib/util/cpuset.o 00:06:25.996 CC lib/ioat/ioat.o 00:06:25.996 CC lib/util/crc16.o 00:06:25.996 CC lib/util/crc32.o 00:06:25.996 CC lib/util/crc32c.o 00:06:25.996 CC lib/util/crc32_ieee.o 00:06:25.996 CC lib/util/crc64.o 00:06:25.996 CC lib/util/dif.o 00:06:25.996 CC lib/util/fd.o 00:06:25.996 CC lib/util/fd_group.o 00:06:25.996 CC lib/util/file.o 00:06:25.996 CC lib/util/hexlify.o 00:06:25.996 CC lib/util/iov.o 00:06:25.996 CC lib/util/math.o 00:06:25.996 CC lib/util/net.o 00:06:25.996 CC lib/util/pipe.o 00:06:25.996 CC lib/util/strerror_tls.o 00:06:25.996 CC lib/util/string.o 00:06:25.996 CC lib/util/uuid.o 00:06:25.996 CC lib/util/xor.o 00:06:25.996 CC lib/util/zipf.o 00:06:25.996 CC lib/util/md5.o 00:06:25.996 CC lib/vfio_user/host/vfio_user_pci.o 00:06:25.996 CC lib/vfio_user/host/vfio_user.o 00:06:25.996 LIB libspdk_dma.a 00:06:25.996 SO libspdk_dma.so.5.0 00:06:25.996 LIB libspdk_ioat.a 00:06:25.996 SYMLINK libspdk_dma.so 00:06:25.996 SO libspdk_ioat.so.7.0 00:06:25.996 SYMLINK libspdk_ioat.so 00:06:25.996 LIB libspdk_vfio_user.a 00:06:25.996 SO libspdk_vfio_user.so.5.0 00:06:25.996 LIB libspdk_util.a 00:06:25.996 SYMLINK libspdk_vfio_user.so 00:06:25.996 SO libspdk_util.so.10.1 00:06:25.996 SYMLINK libspdk_util.so 00:06:25.996 LIB libspdk_trace_parser.a 00:06:25.996 SO libspdk_trace_parser.so.6.0 00:06:25.996 SYMLINK libspdk_trace_parser.so 00:06:25.996 CC lib/json/json_parse.o 00:06:25.996 CC lib/json/json_util.o 00:06:25.996 CC lib/json/json_write.o 00:06:25.996 CC lib/vmd/vmd.o 00:06:25.996 CC lib/vmd/led.o 00:06:25.996 CC lib/env_dpdk/env.o 00:06:25.996 CC lib/env_dpdk/memory.o 00:06:25.996 CC lib/rdma_provider/common.o 00:06:25.996 CC lib/env_dpdk/pci.o 00:06:25.996 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:25.996 CC lib/env_dpdk/init.o 00:06:25.996 CC lib/idxd/idxd.o 00:06:25.996 CC lib/rdma_utils/rdma_utils.o 00:06:25.996 CC lib/env_dpdk/threads.o 00:06:25.996 CC lib/idxd/idxd_user.o 00:06:25.996 CC lib/env_dpdk/pci_ioat.o 00:06:25.996 CC lib/conf/conf.o 00:06:25.996 CC lib/idxd/idxd_kernel.o 00:06:25.996 CC lib/env_dpdk/pci_virtio.o 00:06:25.996 CC lib/env_dpdk/pci_vmd.o 00:06:25.996 CC lib/env_dpdk/pci_idxd.o 00:06:25.996 CC lib/env_dpdk/pci_event.o 00:06:25.996 CC lib/env_dpdk/sigbus_handler.o 00:06:25.996 CC lib/env_dpdk/pci_dpdk.o 00:06:25.996 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:25.996 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:25.996 LIB libspdk_rdma_provider.a 00:06:25.996 LIB libspdk_conf.a 00:06:25.996 SO libspdk_rdma_provider.so.6.0 00:06:25.996 SO libspdk_conf.so.6.0 00:06:25.996 LIB libspdk_json.a 00:06:25.996 LIB libspdk_rdma_utils.a 00:06:25.996 SYMLINK libspdk_rdma_provider.so 00:06:25.996 SO libspdk_json.so.6.0 00:06:25.996 SYMLINK libspdk_conf.so 00:06:25.996 SO libspdk_rdma_utils.so.1.0 00:06:25.996 SYMLINK libspdk_json.so 00:06:25.996 SYMLINK libspdk_rdma_utils.so 00:06:26.255 LIB libspdk_vmd.a 00:06:26.255 LIB libspdk_idxd.a 00:06:26.255 SO libspdk_vmd.so.6.0 00:06:26.255 SO libspdk_idxd.so.12.1 00:06:26.255 SYMLINK libspdk_vmd.so 00:06:26.255 SYMLINK libspdk_idxd.so 
00:06:26.255 CC lib/jsonrpc/jsonrpc_server.o 00:06:26.255 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:26.255 CC lib/jsonrpc/jsonrpc_client.o 00:06:26.255 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:26.514 LIB libspdk_jsonrpc.a 00:06:26.514 SO libspdk_jsonrpc.so.6.0 00:06:26.773 SYMLINK libspdk_jsonrpc.so 00:06:26.773 LIB libspdk_env_dpdk.a 00:06:26.773 SO libspdk_env_dpdk.so.15.1 00:06:26.773 SYMLINK libspdk_env_dpdk.so 00:06:27.032 CC lib/rpc/rpc.o 00:06:27.032 LIB libspdk_rpc.a 00:06:27.292 SO libspdk_rpc.so.6.0 00:06:27.292 SYMLINK libspdk_rpc.so 00:06:27.551 CC lib/keyring/keyring.o 00:06:27.551 CC lib/trace/trace.o 00:06:27.551 CC lib/keyring/keyring_rpc.o 00:06:27.551 CC lib/trace/trace_flags.o 00:06:27.551 CC lib/notify/notify.o 00:06:27.551 CC lib/trace/trace_rpc.o 00:06:27.551 CC lib/notify/notify_rpc.o 00:06:27.811 LIB libspdk_notify.a 00:06:27.811 LIB libspdk_keyring.a 00:06:27.811 SO libspdk_notify.so.6.0 00:06:27.811 LIB libspdk_trace.a 00:06:27.811 SO libspdk_keyring.so.2.0 00:06:27.811 SYMLINK libspdk_notify.so 00:06:27.811 SO libspdk_trace.so.11.0 00:06:27.811 SYMLINK libspdk_keyring.so 00:06:27.811 SYMLINK libspdk_trace.so 00:06:28.381 CC lib/thread/thread.o 00:06:28.381 CC lib/thread/iobuf.o 00:06:28.381 CC lib/sock/sock.o 00:06:28.381 CC lib/sock/sock_rpc.o 00:06:28.381 LIB libspdk_sock.a 00:06:28.640 SO libspdk_sock.so.10.0 00:06:28.640 SYMLINK libspdk_sock.so 00:06:28.898 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:28.898 CC lib/nvme/nvme_ctrlr.o 00:06:28.898 CC lib/nvme/nvme_fabric.o 00:06:28.898 CC lib/nvme/nvme_ns_cmd.o 00:06:28.899 CC lib/nvme/nvme_ns.o 00:06:28.899 CC lib/nvme/nvme_pcie_common.o 00:06:28.899 CC lib/nvme/nvme_pcie.o 00:06:28.899 CC lib/nvme/nvme_qpair.o 00:06:28.899 CC lib/nvme/nvme.o 00:06:28.899 CC lib/nvme/nvme_quirks.o 00:06:28.899 CC lib/nvme/nvme_transport.o 00:06:28.899 CC lib/nvme/nvme_discovery.o 00:06:28.899 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:28.899 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:28.899 CC lib/nvme/nvme_tcp.o 00:06:28.899 CC lib/nvme/nvme_opal.o 00:06:28.899 CC lib/nvme/nvme_io_msg.o 00:06:28.899 CC lib/nvme/nvme_poll_group.o 00:06:28.899 CC lib/nvme/nvme_zns.o 00:06:28.899 CC lib/nvme/nvme_stubs.o 00:06:28.899 CC lib/nvme/nvme_auth.o 00:06:28.899 CC lib/nvme/nvme_cuse.o 00:06:28.899 CC lib/nvme/nvme_vfio_user.o 00:06:28.899 CC lib/nvme/nvme_rdma.o 00:06:29.156 LIB libspdk_thread.a 00:06:29.156 SO libspdk_thread.so.10.2 00:06:29.415 SYMLINK libspdk_thread.so 00:06:29.674 CC lib/blob/blobstore.o 00:06:29.674 CC lib/blob/request.o 00:06:29.674 CC lib/accel/accel.o 00:06:29.674 CC lib/accel/accel_rpc.o 00:06:29.674 CC lib/blob/zeroes.o 00:06:29.674 CC lib/accel/accel_sw.o 00:06:29.674 CC lib/blob/blob_bs_dev.o 00:06:29.674 CC lib/init/json_config.o 00:06:29.674 CC lib/init/subsystem_rpc.o 00:06:29.674 CC lib/init/subsystem.o 00:06:29.674 CC lib/init/rpc.o 00:06:29.674 CC lib/virtio/virtio.o 00:06:29.674 CC lib/virtio/virtio_vhost_user.o 00:06:29.674 CC lib/virtio/virtio_vfio_user.o 00:06:29.674 CC lib/vfu_tgt/tgt_endpoint.o 00:06:29.674 CC lib/fsdev/fsdev.o 00:06:29.674 CC lib/vfu_tgt/tgt_rpc.o 00:06:29.674 CC lib/virtio/virtio_pci.o 00:06:29.674 CC lib/fsdev/fsdev_io.o 00:06:29.674 CC lib/fsdev/fsdev_rpc.o 00:06:29.932 LIB libspdk_init.a 00:06:29.933 SO libspdk_init.so.6.0 00:06:29.933 LIB libspdk_vfu_tgt.a 00:06:29.933 LIB libspdk_virtio.a 00:06:29.933 SYMLINK libspdk_init.so 00:06:29.933 SO libspdk_vfu_tgt.so.3.0 00:06:29.933 SO libspdk_virtio.so.7.0 00:06:29.933 SYMLINK libspdk_vfu_tgt.so 00:06:29.933 SYMLINK libspdk_virtio.so 
00:06:30.191 LIB libspdk_fsdev.a 00:06:30.191 SO libspdk_fsdev.so.1.0 00:06:30.191 CC lib/event/app.o 00:06:30.191 CC lib/event/reactor.o 00:06:30.191 CC lib/event/log_rpc.o 00:06:30.191 CC lib/event/app_rpc.o 00:06:30.191 CC lib/event/scheduler_static.o 00:06:30.191 SYMLINK libspdk_fsdev.so 00:06:30.450 LIB libspdk_accel.a 00:06:30.450 SO libspdk_accel.so.16.0 00:06:30.450 SYMLINK libspdk_accel.so 00:06:30.710 LIB libspdk_nvme.a 00:06:30.710 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:30.710 LIB libspdk_event.a 00:06:30.710 SO libspdk_nvme.so.15.0 00:06:30.710 SO libspdk_event.so.14.0 00:06:30.710 SYMLINK libspdk_event.so 00:06:30.969 CC lib/bdev/bdev.o 00:06:30.969 CC lib/bdev/bdev_rpc.o 00:06:30.969 CC lib/bdev/bdev_zone.o 00:06:30.969 CC lib/bdev/part.o 00:06:30.969 CC lib/bdev/scsi_nvme.o 00:06:30.969 SYMLINK libspdk_nvme.so 00:06:30.969 LIB libspdk_fuse_dispatcher.a 00:06:31.229 SO libspdk_fuse_dispatcher.so.1.0 00:06:31.229 SYMLINK libspdk_fuse_dispatcher.so 00:06:31.797 LIB libspdk_blob.a 00:06:31.797 SO libspdk_blob.so.11.0 00:06:31.797 SYMLINK libspdk_blob.so 00:06:32.368 CC lib/blobfs/blobfs.o 00:06:32.368 CC lib/blobfs/tree.o 00:06:32.368 CC lib/lvol/lvol.o 00:06:32.627 LIB libspdk_bdev.a 00:06:32.627 SO libspdk_bdev.so.17.0 00:06:32.887 LIB libspdk_blobfs.a 00:06:32.887 SYMLINK libspdk_bdev.so 00:06:32.887 SO libspdk_blobfs.so.10.0 00:06:32.887 LIB libspdk_lvol.a 00:06:32.887 SO libspdk_lvol.so.10.0 00:06:32.887 SYMLINK libspdk_blobfs.so 00:06:32.887 SYMLINK libspdk_lvol.so 00:06:33.147 CC lib/scsi/dev.o 00:06:33.147 CC lib/scsi/lun.o 00:06:33.147 CC lib/scsi/port.o 00:06:33.147 CC lib/scsi/scsi.o 00:06:33.147 CC lib/scsi/scsi_bdev.o 00:06:33.147 CC lib/ublk/ublk.o 00:06:33.147 CC lib/nvmf/ctrlr.o 00:06:33.147 CC lib/scsi/scsi_pr.o 00:06:33.147 CC lib/scsi/scsi_rpc.o 00:06:33.147 CC lib/nvmf/ctrlr_discovery.o 00:06:33.147 CC lib/ublk/ublk_rpc.o 00:06:33.147 CC lib/ftl/ftl_core.o 00:06:33.147 CC lib/scsi/task.o 00:06:33.147 CC lib/nbd/nbd.o 00:06:33.147 CC lib/nvmf/ctrlr_bdev.o 00:06:33.147 CC lib/ftl/ftl_init.o 00:06:33.147 CC lib/nbd/nbd_rpc.o 00:06:33.147 CC lib/nvmf/subsystem.o 00:06:33.147 CC lib/ftl/ftl_layout.o 00:06:33.147 CC lib/nvmf/nvmf.o 00:06:33.147 CC lib/ftl/ftl_debug.o 00:06:33.147 CC lib/nvmf/nvmf_rpc.o 00:06:33.147 CC lib/nvmf/transport.o 00:06:33.147 CC lib/ftl/ftl_io.o 00:06:33.147 CC lib/nvmf/tcp.o 00:06:33.147 CC lib/ftl/ftl_sb.o 00:06:33.147 CC lib/nvmf/stubs.o 00:06:33.147 CC lib/ftl/ftl_l2p.o 00:06:33.147 CC lib/ftl/ftl_l2p_flat.o 00:06:33.147 CC lib/nvmf/mdns_server.o 00:06:33.147 CC lib/nvmf/vfio_user.o 00:06:33.147 CC lib/ftl/ftl_nv_cache.o 00:06:33.147 CC lib/nvmf/rdma.o 00:06:33.147 CC lib/ftl/ftl_band.o 00:06:33.147 CC lib/nvmf/auth.o 00:06:33.147 CC lib/ftl/ftl_band_ops.o 00:06:33.147 CC lib/ftl/ftl_writer.o 00:06:33.147 CC lib/ftl/ftl_rq.o 00:06:33.147 CC lib/ftl/ftl_reloc.o 00:06:33.147 CC lib/ftl/ftl_l2p_cache.o 00:06:33.147 CC lib/ftl/ftl_p2l.o 00:06:33.147 CC lib/ftl/ftl_p2l_log.o 00:06:33.147 CC lib/ftl/mngt/ftl_mngt.o 00:06:33.147 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:33.147 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:33.147 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:33.147 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:33.147 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:33.147 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:33.147 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:33.147 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:33.147 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:33.147 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:33.147 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:33.147 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:06:33.147 CC lib/ftl/utils/ftl_conf.o 00:06:33.147 CC lib/ftl/utils/ftl_md.o 00:06:33.147 CC lib/ftl/utils/ftl_mempool.o 00:06:33.147 CC lib/ftl/utils/ftl_property.o 00:06:33.147 CC lib/ftl/utils/ftl_bitmap.o 00:06:33.147 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:33.147 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:33.147 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:33.147 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:33.147 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:33.147 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:33.147 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:33.147 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:33.147 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:33.147 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:33.147 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:33.147 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:33.147 CC lib/ftl/base/ftl_base_dev.o 00:06:33.147 CC lib/ftl/base/ftl_base_bdev.o 00:06:33.147 CC lib/ftl/ftl_trace.o 00:06:33.147 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:33.714 LIB libspdk_nbd.a 00:06:33.714 LIB libspdk_scsi.a 00:06:33.714 SO libspdk_nbd.so.7.0 00:06:33.714 SO libspdk_scsi.so.9.0 00:06:33.714 LIB libspdk_ublk.a 00:06:33.714 SYMLINK libspdk_nbd.so 00:06:33.714 SO libspdk_ublk.so.3.0 00:06:33.714 SYMLINK libspdk_scsi.so 00:06:33.972 SYMLINK libspdk_ublk.so 00:06:34.230 CC lib/vhost/vhost_rpc.o 00:06:34.230 CC lib/vhost/vhost.o 00:06:34.230 CC lib/vhost/vhost_scsi.o 00:06:34.230 CC lib/vhost/vhost_blk.o 00:06:34.230 CC lib/vhost/rte_vhost_user.o 00:06:34.230 CC lib/iscsi/conn.o 00:06:34.230 CC lib/iscsi/init_grp.o 00:06:34.230 CC lib/iscsi/iscsi.o 00:06:34.230 CC lib/iscsi/param.o 00:06:34.230 CC lib/iscsi/portal_grp.o 00:06:34.230 CC lib/iscsi/tgt_node.o 00:06:34.230 CC lib/iscsi/iscsi_subsystem.o 00:06:34.230 CC lib/iscsi/iscsi_rpc.o 00:06:34.230 CC lib/iscsi/task.o 00:06:34.230 LIB libspdk_ftl.a 00:06:34.230 SO libspdk_ftl.so.9.0 00:06:34.490 SYMLINK libspdk_ftl.so 00:06:35.058 LIB libspdk_vhost.a 00:06:35.058 LIB libspdk_nvmf.a 00:06:35.058 SO libspdk_vhost.so.8.0 00:06:35.058 SO libspdk_nvmf.so.19.0 00:06:35.058 SYMLINK libspdk_vhost.so 00:06:35.058 LIB libspdk_iscsi.a 00:06:35.058 SYMLINK libspdk_nvmf.so 00:06:35.318 SO libspdk_iscsi.so.8.0 00:06:35.318 SYMLINK libspdk_iscsi.so 00:06:35.887 CC module/vfu_device/vfu_virtio.o 00:06:35.887 CC module/vfu_device/vfu_virtio_scsi.o 00:06:35.887 CC module/vfu_device/vfu_virtio_blk.o 00:06:35.887 CC module/env_dpdk/env_dpdk_rpc.o 00:06:35.887 CC module/vfu_device/vfu_virtio_fs.o 00:06:35.887 CC module/vfu_device/vfu_virtio_rpc.o 00:06:35.887 CC module/blob/bdev/blob_bdev.o 00:06:35.887 CC module/accel/error/accel_error.o 00:06:35.887 CC module/accel/error/accel_error_rpc.o 00:06:35.887 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:35.887 CC module/accel/dsa/accel_dsa_rpc.o 00:06:35.887 CC module/accel/dsa/accel_dsa.o 00:06:35.887 CC module/sock/posix/posix.o 00:06:35.887 CC module/fsdev/aio/fsdev_aio.o 00:06:35.887 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:35.887 CC module/fsdev/aio/linux_aio_mgr.o 00:06:35.887 CC module/keyring/file/keyring.o 00:06:35.887 CC module/keyring/file/keyring_rpc.o 00:06:35.887 CC module/keyring/linux/keyring.o 00:06:35.887 CC module/scheduler/gscheduler/gscheduler.o 00:06:35.887 CC module/keyring/linux/keyring_rpc.o 00:06:35.887 CC module/accel/iaa/accel_iaa.o 00:06:35.887 CC module/accel/iaa/accel_iaa_rpc.o 00:06:35.887 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:35.887 CC module/accel/ioat/accel_ioat.o 00:06:35.887 CC 
module/accel/ioat/accel_ioat_rpc.o 00:06:35.887 LIB libspdk_env_dpdk_rpc.a 00:06:35.887 SO libspdk_env_dpdk_rpc.so.6.0 00:06:36.147 SYMLINK libspdk_env_dpdk_rpc.so 00:06:36.147 LIB libspdk_keyring_linux.a 00:06:36.147 LIB libspdk_scheduler_gscheduler.a 00:06:36.147 LIB libspdk_scheduler_dpdk_governor.a 00:06:36.147 LIB libspdk_keyring_file.a 00:06:36.147 LIB libspdk_accel_error.a 00:06:36.147 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:36.147 SO libspdk_keyring_file.so.2.0 00:06:36.147 SO libspdk_keyring_linux.so.1.0 00:06:36.147 SO libspdk_scheduler_gscheduler.so.4.0 00:06:36.147 LIB libspdk_scheduler_dynamic.a 00:06:36.147 LIB libspdk_accel_ioat.a 00:06:36.147 SO libspdk_accel_error.so.2.0 00:06:36.147 LIB libspdk_accel_iaa.a 00:06:36.147 SO libspdk_scheduler_dynamic.so.4.0 00:06:36.147 SO libspdk_accel_ioat.so.6.0 00:06:36.147 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:36.147 LIB libspdk_blob_bdev.a 00:06:36.147 SYMLINK libspdk_keyring_file.so 00:06:36.147 SYMLINK libspdk_keyring_linux.so 00:06:36.147 SO libspdk_accel_iaa.so.3.0 00:06:36.147 LIB libspdk_accel_dsa.a 00:06:36.147 SYMLINK libspdk_scheduler_gscheduler.so 00:06:36.147 SYMLINK libspdk_accel_error.so 00:06:36.147 SYMLINK libspdk_scheduler_dynamic.so 00:06:36.147 SO libspdk_blob_bdev.so.11.0 00:06:36.147 SO libspdk_accel_dsa.so.5.0 00:06:36.147 SYMLINK libspdk_accel_ioat.so 00:06:36.423 SYMLINK libspdk_accel_iaa.so 00:06:36.423 SYMLINK libspdk_blob_bdev.so 00:06:36.423 SYMLINK libspdk_accel_dsa.so 00:06:36.423 LIB libspdk_vfu_device.a 00:06:36.423 SO libspdk_vfu_device.so.3.0 00:06:36.423 SYMLINK libspdk_vfu_device.so 00:06:36.423 LIB libspdk_fsdev_aio.a 00:06:36.751 SO libspdk_fsdev_aio.so.1.0 00:06:36.751 LIB libspdk_sock_posix.a 00:06:36.751 SO libspdk_sock_posix.so.6.0 00:06:36.751 SYMLINK libspdk_fsdev_aio.so 00:06:36.751 SYMLINK libspdk_sock_posix.so 00:06:36.751 CC module/bdev/gpt/gpt.o 00:06:36.751 CC module/bdev/gpt/vbdev_gpt.o 00:06:36.751 CC module/blobfs/bdev/blobfs_bdev.o 00:06:36.751 CC module/bdev/delay/vbdev_delay.o 00:06:36.751 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:36.751 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:36.751 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:36.751 CC module/bdev/lvol/vbdev_lvol.o 00:06:36.751 CC module/bdev/null/bdev_null.o 00:06:36.751 CC module/bdev/null/bdev_null_rpc.o 00:06:36.751 CC module/bdev/malloc/bdev_malloc.o 00:06:36.751 CC module/bdev/split/vbdev_split.o 00:06:36.751 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:36.751 CC module/bdev/split/vbdev_split_rpc.o 00:06:36.751 CC module/bdev/error/vbdev_error.o 00:06:36.751 CC module/bdev/passthru/vbdev_passthru.o 00:06:36.751 CC module/bdev/error/vbdev_error_rpc.o 00:06:36.751 CC module/bdev/nvme/bdev_nvme.o 00:06:36.751 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:36.751 CC module/bdev/raid/bdev_raid.o 00:06:36.751 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:36.751 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:36.751 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:36.751 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:36.752 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:36.752 CC module/bdev/ftl/bdev_ftl.o 00:06:36.752 CC module/bdev/raid/bdev_raid_rpc.o 00:06:36.752 CC module/bdev/raid/bdev_raid_sb.o 00:06:36.752 CC module/bdev/raid/raid0.o 00:06:36.752 CC module/bdev/iscsi/bdev_iscsi.o 00:06:36.752 CC module/bdev/nvme/nvme_rpc.o 00:06:36.752 CC module/bdev/nvme/bdev_mdns_client.o 00:06:36.752 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:36.752 CC module/bdev/nvme/vbdev_opal.o 00:06:36.752 CC 
module/bdev/raid/raid1.o 00:06:36.752 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:36.752 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:36.752 CC module/bdev/raid/concat.o 00:06:36.752 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:36.752 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:36.752 CC module/bdev/aio/bdev_aio_rpc.o 00:06:36.752 CC module/bdev/aio/bdev_aio.o 00:06:37.048 LIB libspdk_blobfs_bdev.a 00:06:37.048 SO libspdk_blobfs_bdev.so.6.0 00:06:37.048 LIB libspdk_bdev_split.a 00:06:37.048 LIB libspdk_bdev_null.a 00:06:37.048 LIB libspdk_bdev_gpt.a 00:06:37.048 SO libspdk_bdev_split.so.6.0 00:06:37.048 LIB libspdk_bdev_passthru.a 00:06:37.048 SO libspdk_bdev_null.so.6.0 00:06:37.048 LIB libspdk_bdev_error.a 00:06:37.048 SO libspdk_bdev_gpt.so.6.0 00:06:37.048 SYMLINK libspdk_blobfs_bdev.so 00:06:37.048 LIB libspdk_bdev_zone_block.a 00:06:37.048 SO libspdk_bdev_passthru.so.6.0 00:06:37.048 LIB libspdk_bdev_delay.a 00:06:37.048 LIB libspdk_bdev_ftl.a 00:06:37.307 SO libspdk_bdev_error.so.6.0 00:06:37.307 SO libspdk_bdev_zone_block.so.6.0 00:06:37.307 SYMLINK libspdk_bdev_split.so 00:06:37.307 SO libspdk_bdev_delay.so.6.0 00:06:37.307 SO libspdk_bdev_ftl.so.6.0 00:06:37.307 SYMLINK libspdk_bdev_null.so 00:06:37.307 LIB libspdk_bdev_malloc.a 00:06:37.307 SYMLINK libspdk_bdev_gpt.so 00:06:37.307 LIB libspdk_bdev_aio.a 00:06:37.307 SYMLINK libspdk_bdev_passthru.so 00:06:37.307 LIB libspdk_bdev_iscsi.a 00:06:37.307 SYMLINK libspdk_bdev_error.so 00:06:37.307 SYMLINK libspdk_bdev_zone_block.so 00:06:37.307 SO libspdk_bdev_malloc.so.6.0 00:06:37.307 SO libspdk_bdev_aio.so.6.0 00:06:37.307 SO libspdk_bdev_iscsi.so.6.0 00:06:37.307 SYMLINK libspdk_bdev_delay.so 00:06:37.307 SYMLINK libspdk_bdev_ftl.so 00:06:37.307 SYMLINK libspdk_bdev_malloc.so 00:06:37.307 LIB libspdk_bdev_lvol.a 00:06:37.307 SYMLINK libspdk_bdev_aio.so 00:06:37.307 SYMLINK libspdk_bdev_iscsi.so 00:06:37.307 SO libspdk_bdev_lvol.so.6.0 00:06:37.307 LIB libspdk_bdev_virtio.a 00:06:37.307 SO libspdk_bdev_virtio.so.6.0 00:06:37.307 SYMLINK libspdk_bdev_lvol.so 00:06:37.566 SYMLINK libspdk_bdev_virtio.so 00:06:37.566 LIB libspdk_bdev_raid.a 00:06:37.825 SO libspdk_bdev_raid.so.6.0 00:06:37.825 SYMLINK libspdk_bdev_raid.so 00:06:38.394 LIB libspdk_bdev_nvme.a 00:06:38.653 SO libspdk_bdev_nvme.so.7.0 00:06:38.653 SYMLINK libspdk_bdev_nvme.so 00:06:39.223 CC module/event/subsystems/iobuf/iobuf.o 00:06:39.223 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:39.223 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:39.223 CC module/event/subsystems/vmd/vmd.o 00:06:39.223 CC module/event/subsystems/scheduler/scheduler.o 00:06:39.223 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:39.223 CC module/event/subsystems/sock/sock.o 00:06:39.223 CC module/event/subsystems/keyring/keyring.o 00:06:39.223 CC module/event/subsystems/fsdev/fsdev.o 00:06:39.223 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:06:39.483 LIB libspdk_event_vhost_blk.a 00:06:39.483 LIB libspdk_event_sock.a 00:06:39.483 LIB libspdk_event_vmd.a 00:06:39.483 LIB libspdk_event_keyring.a 00:06:39.483 LIB libspdk_event_iobuf.a 00:06:39.483 LIB libspdk_event_scheduler.a 00:06:39.483 LIB libspdk_event_fsdev.a 00:06:39.483 LIB libspdk_event_vfu_tgt.a 00:06:39.483 SO libspdk_event_sock.so.5.0 00:06:39.483 SO libspdk_event_vmd.so.6.0 00:06:39.483 SO libspdk_event_vhost_blk.so.3.0 00:06:39.483 SO libspdk_event_keyring.so.1.0 00:06:39.483 SO libspdk_event_iobuf.so.3.0 00:06:39.483 SO libspdk_event_scheduler.so.4.0 00:06:39.483 SO libspdk_event_vfu_tgt.so.3.0 
00:06:39.483 SO libspdk_event_fsdev.so.1.0 00:06:39.483 SYMLINK libspdk_event_sock.so 00:06:39.483 SYMLINK libspdk_event_vhost_blk.so 00:06:39.483 SYMLINK libspdk_event_vmd.so 00:06:39.483 SYMLINK libspdk_event_keyring.so 00:06:39.483 SYMLINK libspdk_event_vfu_tgt.so 00:06:39.483 SYMLINK libspdk_event_scheduler.so 00:06:39.483 SYMLINK libspdk_event_iobuf.so 00:06:39.483 SYMLINK libspdk_event_fsdev.so 00:06:39.742 CC module/event/subsystems/accel/accel.o 00:06:40.002 LIB libspdk_event_accel.a 00:06:40.002 SO libspdk_event_accel.so.6.0 00:06:40.002 SYMLINK libspdk_event_accel.so 00:06:40.261 CC module/event/subsystems/bdev/bdev.o 00:06:40.520 LIB libspdk_event_bdev.a 00:06:40.520 SO libspdk_event_bdev.so.6.0 00:06:40.520 SYMLINK libspdk_event_bdev.so 00:06:41.088 CC module/event/subsystems/scsi/scsi.o 00:06:41.088 CC module/event/subsystems/nbd/nbd.o 00:06:41.088 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:41.088 CC module/event/subsystems/ublk/ublk.o 00:06:41.088 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:41.088 LIB libspdk_event_ublk.a 00:06:41.088 LIB libspdk_event_nbd.a 00:06:41.088 LIB libspdk_event_scsi.a 00:06:41.088 SO libspdk_event_ublk.so.3.0 00:06:41.088 SO libspdk_event_nbd.so.6.0 00:06:41.088 SO libspdk_event_scsi.so.6.0 00:06:41.088 LIB libspdk_event_nvmf.a 00:06:41.088 SYMLINK libspdk_event_ublk.so 00:06:41.088 SYMLINK libspdk_event_nbd.so 00:06:41.088 SO libspdk_event_nvmf.so.6.0 00:06:41.088 SYMLINK libspdk_event_scsi.so 00:06:41.348 SYMLINK libspdk_event_nvmf.so 00:06:41.608 CC module/event/subsystems/iscsi/iscsi.o 00:06:41.608 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:41.608 LIB libspdk_event_vhost_scsi.a 00:06:41.608 LIB libspdk_event_iscsi.a 00:06:41.608 SO libspdk_event_vhost_scsi.so.3.0 00:06:41.608 SO libspdk_event_iscsi.so.6.0 00:06:41.867 SYMLINK libspdk_event_iscsi.so 00:06:41.867 SYMLINK libspdk_event_vhost_scsi.so 00:06:41.867 SO libspdk.so.6.0 00:06:41.867 SYMLINK libspdk.so 00:06:42.126 CC app/spdk_lspci/spdk_lspci.o 00:06:42.126 CXX app/trace/trace.o 00:06:42.126 CC app/spdk_nvme_identify/identify.o 00:06:42.126 CC app/trace_record/trace_record.o 00:06:42.387 CC test/rpc_client/rpc_client_test.o 00:06:42.387 CC app/spdk_nvme_perf/perf.o 00:06:42.387 CC app/spdk_top/spdk_top.o 00:06:42.387 TEST_HEADER include/spdk/accel.h 00:06:42.387 TEST_HEADER include/spdk/accel_module.h 00:06:42.387 CC app/spdk_nvme_discover/discovery_aer.o 00:06:42.387 TEST_HEADER include/spdk/assert.h 00:06:42.387 TEST_HEADER include/spdk/barrier.h 00:06:42.387 TEST_HEADER include/spdk/base64.h 00:06:42.387 TEST_HEADER include/spdk/bdev.h 00:06:42.387 TEST_HEADER include/spdk/bdev_zone.h 00:06:42.387 TEST_HEADER include/spdk/bit_array.h 00:06:42.387 TEST_HEADER include/spdk/bdev_module.h 00:06:42.387 TEST_HEADER include/spdk/bit_pool.h 00:06:42.387 TEST_HEADER include/spdk/blob_bdev.h 00:06:42.387 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:42.387 TEST_HEADER include/spdk/blobfs.h 00:06:42.387 TEST_HEADER include/spdk/blob.h 00:06:42.387 TEST_HEADER include/spdk/conf.h 00:06:42.387 TEST_HEADER include/spdk/config.h 00:06:42.387 TEST_HEADER include/spdk/cpuset.h 00:06:42.387 TEST_HEADER include/spdk/crc16.h 00:06:42.387 TEST_HEADER include/spdk/crc32.h 00:06:42.387 TEST_HEADER include/spdk/crc64.h 00:06:42.387 TEST_HEADER include/spdk/dif.h 00:06:42.387 TEST_HEADER include/spdk/endian.h 00:06:42.387 TEST_HEADER include/spdk/dma.h 00:06:42.387 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:42.387 TEST_HEADER include/spdk/env.h 00:06:42.387 TEST_HEADER 
include/spdk/env_dpdk.h 00:06:42.387 TEST_HEADER include/spdk/event.h 00:06:42.387 TEST_HEADER include/spdk/fd_group.h 00:06:42.387 TEST_HEADER include/spdk/fd.h 00:06:42.387 TEST_HEADER include/spdk/file.h 00:06:42.387 TEST_HEADER include/spdk/fsdev.h 00:06:42.387 TEST_HEADER include/spdk/fsdev_module.h 00:06:42.387 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:42.387 TEST_HEADER include/spdk/ftl.h 00:06:42.387 CC app/iscsi_tgt/iscsi_tgt.o 00:06:42.387 TEST_HEADER include/spdk/gpt_spec.h 00:06:42.387 TEST_HEADER include/spdk/hexlify.h 00:06:42.387 TEST_HEADER include/spdk/idxd.h 00:06:42.387 TEST_HEADER include/spdk/idxd_spec.h 00:06:42.387 TEST_HEADER include/spdk/histogram_data.h 00:06:42.387 TEST_HEADER include/spdk/init.h 00:06:42.387 TEST_HEADER include/spdk/ioat.h 00:06:42.387 TEST_HEADER include/spdk/ioat_spec.h 00:06:42.387 TEST_HEADER include/spdk/iscsi_spec.h 00:06:42.387 TEST_HEADER include/spdk/jsonrpc.h 00:06:42.387 TEST_HEADER include/spdk/keyring.h 00:06:42.387 TEST_HEADER include/spdk/json.h 00:06:42.387 CC app/spdk_dd/spdk_dd.o 00:06:42.387 TEST_HEADER include/spdk/keyring_module.h 00:06:42.387 TEST_HEADER include/spdk/likely.h 00:06:42.387 TEST_HEADER include/spdk/log.h 00:06:42.387 TEST_HEADER include/spdk/md5.h 00:06:42.387 TEST_HEADER include/spdk/lvol.h 00:06:42.387 TEST_HEADER include/spdk/mmio.h 00:06:42.387 TEST_HEADER include/spdk/memory.h 00:06:42.387 TEST_HEADER include/spdk/nbd.h 00:06:42.387 TEST_HEADER include/spdk/net.h 00:06:42.387 TEST_HEADER include/spdk/notify.h 00:06:42.387 TEST_HEADER include/spdk/nvme.h 00:06:42.387 TEST_HEADER include/spdk/nvme_intel.h 00:06:42.387 CC app/nvmf_tgt/nvmf_main.o 00:06:42.387 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:42.387 TEST_HEADER include/spdk/nvme_spec.h 00:06:42.387 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:42.387 TEST_HEADER include/spdk/nvme_zns.h 00:06:42.387 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:42.387 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:42.387 TEST_HEADER include/spdk/nvmf.h 00:06:42.387 TEST_HEADER include/spdk/nvmf_spec.h 00:06:42.387 TEST_HEADER include/spdk/nvmf_transport.h 00:06:42.387 TEST_HEADER include/spdk/opal_spec.h 00:06:42.387 TEST_HEADER include/spdk/opal.h 00:06:42.387 TEST_HEADER include/spdk/pci_ids.h 00:06:42.387 TEST_HEADER include/spdk/pipe.h 00:06:42.387 TEST_HEADER include/spdk/queue.h 00:06:42.387 TEST_HEADER include/spdk/reduce.h 00:06:42.387 TEST_HEADER include/spdk/rpc.h 00:06:42.387 TEST_HEADER include/spdk/scheduler.h 00:06:42.387 TEST_HEADER include/spdk/scsi.h 00:06:42.387 TEST_HEADER include/spdk/scsi_spec.h 00:06:42.387 TEST_HEADER include/spdk/sock.h 00:06:42.387 TEST_HEADER include/spdk/thread.h 00:06:42.387 TEST_HEADER include/spdk/string.h 00:06:42.387 TEST_HEADER include/spdk/trace.h 00:06:42.387 TEST_HEADER include/spdk/trace_parser.h 00:06:42.387 TEST_HEADER include/spdk/stdinc.h 00:06:42.387 TEST_HEADER include/spdk/ublk.h 00:06:42.387 TEST_HEADER include/spdk/tree.h 00:06:42.387 TEST_HEADER include/spdk/util.h 00:06:42.387 TEST_HEADER include/spdk/version.h 00:06:42.387 TEST_HEADER include/spdk/uuid.h 00:06:42.387 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:42.387 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:42.387 CC app/spdk_tgt/spdk_tgt.o 00:06:42.387 TEST_HEADER include/spdk/vhost.h 00:06:42.387 TEST_HEADER include/spdk/vmd.h 00:06:42.387 TEST_HEADER include/spdk/xor.h 00:06:42.387 CXX test/cpp_headers/accel.o 00:06:42.387 TEST_HEADER include/spdk/zipf.h 00:06:42.387 CXX test/cpp_headers/accel_module.o 00:06:42.387 CXX 
test/cpp_headers/assert.o 00:06:42.387 CXX test/cpp_headers/barrier.o 00:06:42.387 CXX test/cpp_headers/bdev.o 00:06:42.387 CXX test/cpp_headers/bit_array.o 00:06:42.387 CXX test/cpp_headers/base64.o 00:06:42.387 CXX test/cpp_headers/bdev_zone.o 00:06:42.387 CXX test/cpp_headers/bdev_module.o 00:06:42.387 CXX test/cpp_headers/bit_pool.o 00:06:42.387 CXX test/cpp_headers/blob_bdev.o 00:06:42.387 CXX test/cpp_headers/blobfs_bdev.o 00:06:42.387 CXX test/cpp_headers/blobfs.o 00:06:42.387 CXX test/cpp_headers/conf.o 00:06:42.387 CXX test/cpp_headers/config.o 00:06:42.387 CXX test/cpp_headers/cpuset.o 00:06:42.387 CXX test/cpp_headers/blob.o 00:06:42.387 CXX test/cpp_headers/crc16.o 00:06:42.387 CXX test/cpp_headers/crc32.o 00:06:42.387 CXX test/cpp_headers/dif.o 00:06:42.388 CXX test/cpp_headers/crc64.o 00:06:42.388 CXX test/cpp_headers/dma.o 00:06:42.388 CXX test/cpp_headers/event.o 00:06:42.388 CXX test/cpp_headers/endian.o 00:06:42.388 CXX test/cpp_headers/env_dpdk.o 00:06:42.388 CXX test/cpp_headers/env.o 00:06:42.388 CXX test/cpp_headers/fd_group.o 00:06:42.388 CXX test/cpp_headers/file.o 00:06:42.388 CXX test/cpp_headers/fd.o 00:06:42.388 CXX test/cpp_headers/fsdev.o 00:06:42.388 CXX test/cpp_headers/fsdev_module.o 00:06:42.388 CXX test/cpp_headers/fuse_dispatcher.o 00:06:42.388 CXX test/cpp_headers/ftl.o 00:06:42.388 CXX test/cpp_headers/hexlify.o 00:06:42.388 CXX test/cpp_headers/gpt_spec.o 00:06:42.388 CXX test/cpp_headers/idxd.o 00:06:42.388 CXX test/cpp_headers/histogram_data.o 00:06:42.388 CXX test/cpp_headers/init.o 00:06:42.388 CXX test/cpp_headers/idxd_spec.o 00:06:42.388 CXX test/cpp_headers/iscsi_spec.o 00:06:42.388 CXX test/cpp_headers/json.o 00:06:42.388 CXX test/cpp_headers/ioat.o 00:06:42.388 CXX test/cpp_headers/ioat_spec.o 00:06:42.388 CXX test/cpp_headers/keyring_module.o 00:06:42.388 CXX test/cpp_headers/keyring.o 00:06:42.388 CC examples/ioat/perf/perf.o 00:06:42.388 CXX test/cpp_headers/jsonrpc.o 00:06:42.388 CXX test/cpp_headers/likely.o 00:06:42.388 CXX test/cpp_headers/log.o 00:06:42.388 CXX test/cpp_headers/md5.o 00:06:42.388 CXX test/cpp_headers/lvol.o 00:06:42.388 CXX test/cpp_headers/memory.o 00:06:42.388 CXX test/cpp_headers/nbd.o 00:06:42.388 CXX test/cpp_headers/net.o 00:06:42.388 CXX test/cpp_headers/mmio.o 00:06:42.388 CC app/fio/nvme/fio_plugin.o 00:06:42.388 CXX test/cpp_headers/nvme.o 00:06:42.388 CXX test/cpp_headers/notify.o 00:06:42.388 CC examples/ioat/verify/verify.o 00:06:42.388 CXX test/cpp_headers/nvme_intel.o 00:06:42.388 CXX test/cpp_headers/nvme_ocssd.o 00:06:42.388 CC examples/util/zipf/zipf.o 00:06:42.388 CXX test/cpp_headers/nvme_spec.o 00:06:42.388 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:42.388 CXX test/cpp_headers/nvme_zns.o 00:06:42.388 CXX test/cpp_headers/nvmf_cmd.o 00:06:42.388 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:42.388 CXX test/cpp_headers/nvmf.o 00:06:42.388 CXX test/cpp_headers/nvmf_spec.o 00:06:42.388 CXX test/cpp_headers/nvmf_transport.o 00:06:42.388 CC test/env/vtophys/vtophys.o 00:06:42.388 CC test/env/pci/pci_ut.o 00:06:42.388 CXX test/cpp_headers/opal.o 00:06:42.388 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:42.388 CC test/app/histogram_perf/histogram_perf.o 00:06:42.388 CC test/app/jsoncat/jsoncat.o 00:06:42.388 CC test/app/stub/stub.o 00:06:42.388 CC test/thread/poller_perf/poller_perf.o 00:06:42.388 LINK spdk_lspci 00:06:42.388 CC test/env/memory/memory_ut.o 00:06:42.388 CC test/dma/test_dma/test_dma.o 00:06:42.663 CC test/app/bdev_svc/bdev_svc.o 00:06:42.663 CC app/fio/bdev/fio_plugin.o 
00:06:42.663 LINK spdk_nvme_discover 00:06:42.931 LINK iscsi_tgt 00:06:42.931 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:42.931 LINK rpc_client_test 00:06:42.931 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:42.931 CC test/env/mem_callbacks/mem_callbacks.o 00:06:42.931 LINK interrupt_tgt 00:06:42.931 LINK jsoncat 00:06:42.931 LINK zipf 00:06:42.931 LINK vtophys 00:06:42.931 LINK histogram_perf 00:06:42.931 LINK env_dpdk_post_init 00:06:42.931 LINK poller_perf 00:06:42.931 CXX test/cpp_headers/opal_spec.o 00:06:42.931 LINK nvmf_tgt 00:06:42.931 CXX test/cpp_headers/pci_ids.o 00:06:42.931 CXX test/cpp_headers/pipe.o 00:06:42.931 LINK stub 00:06:42.931 CXX test/cpp_headers/queue.o 00:06:42.931 CXX test/cpp_headers/reduce.o 00:06:42.931 LINK spdk_trace_record 00:06:42.931 CXX test/cpp_headers/rpc.o 00:06:42.931 CXX test/cpp_headers/scheduler.o 00:06:42.931 CXX test/cpp_headers/scsi.o 00:06:42.931 CXX test/cpp_headers/scsi_spec.o 00:06:42.931 CXX test/cpp_headers/sock.o 00:06:42.931 CXX test/cpp_headers/stdinc.o 00:06:42.931 CXX test/cpp_headers/string.o 00:06:42.931 CXX test/cpp_headers/thread.o 00:06:42.931 CXX test/cpp_headers/trace.o 00:06:42.931 CXX test/cpp_headers/trace_parser.o 00:06:42.931 CXX test/cpp_headers/tree.o 00:06:42.931 CXX test/cpp_headers/util.o 00:06:42.931 CXX test/cpp_headers/ublk.o 00:06:43.191 CXX test/cpp_headers/uuid.o 00:06:43.191 CXX test/cpp_headers/version.o 00:06:43.191 CXX test/cpp_headers/vfio_user_pci.o 00:06:43.191 CXX test/cpp_headers/vfio_user_spec.o 00:06:43.191 CXX test/cpp_headers/vhost.o 00:06:43.191 CXX test/cpp_headers/vmd.o 00:06:43.191 CXX test/cpp_headers/xor.o 00:06:43.191 CXX test/cpp_headers/zipf.o 00:06:43.191 LINK spdk_tgt 00:06:43.191 LINK ioat_perf 00:06:43.191 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:43.191 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:43.191 LINK verify 00:06:43.191 LINK bdev_svc 00:06:43.191 LINK pci_ut 00:06:43.191 LINK spdk_dd 00:06:43.191 LINK spdk_trace 00:06:43.450 CC examples/vmd/led/led.o 00:06:43.450 CC examples/sock/hello_world/hello_sock.o 00:06:43.450 CC examples/idxd/perf/perf.o 00:06:43.450 CC examples/vmd/lsvmd/lsvmd.o 00:06:43.450 LINK spdk_nvme 00:06:43.450 CC examples/thread/thread/thread_ex.o 00:06:43.450 LINK nvme_fuzz 00:06:43.450 CC test/event/reactor_perf/reactor_perf.o 00:06:43.450 LINK spdk_nvme_identify 00:06:43.450 LINK test_dma 00:06:43.450 CC test/event/event_perf/event_perf.o 00:06:43.450 LINK spdk_bdev 00:06:43.450 CC test/event/reactor/reactor.o 00:06:43.450 CC test/event/app_repeat/app_repeat.o 00:06:43.450 CC test/event/scheduler/scheduler.o 00:06:43.450 LINK spdk_nvme_perf 00:06:43.709 LINK led 00:06:43.709 LINK lsvmd 00:06:43.709 LINK vhost_fuzz 00:06:43.709 LINK reactor_perf 00:06:43.709 LINK mem_callbacks 00:06:43.709 LINK event_perf 00:06:43.709 LINK hello_sock 00:06:43.709 LINK reactor 00:06:43.709 CC app/vhost/vhost.o 00:06:43.709 LINK spdk_top 00:06:43.709 LINK app_repeat 00:06:43.709 LINK thread 00:06:43.709 LINK idxd_perf 00:06:43.709 LINK scheduler 00:06:43.970 LINK vhost 00:06:43.970 LINK memory_ut 00:06:43.970 CC test/nvme/startup/startup.o 00:06:43.970 CC test/nvme/aer/aer.o 00:06:43.970 CC test/nvme/fused_ordering/fused_ordering.o 00:06:43.970 CC test/nvme/connect_stress/connect_stress.o 00:06:43.970 CC test/nvme/boot_partition/boot_partition.o 00:06:43.970 CC test/nvme/cuse/cuse.o 00:06:43.970 CC test/nvme/overhead/overhead.o 00:06:43.970 CC test/nvme/e2edp/nvme_dp.o 00:06:43.970 CC test/nvme/reset/reset.o 00:06:43.970 CC test/nvme/simple_copy/simple_copy.o 
00:06:43.970 CC test/nvme/compliance/nvme_compliance.o 00:06:43.970 CC test/nvme/reserve/reserve.o 00:06:43.970 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:43.970 CC test/nvme/sgl/sgl.o 00:06:43.970 CC test/nvme/fdp/fdp.o 00:06:43.970 CC test/nvme/err_injection/err_injection.o 00:06:43.970 CC test/blobfs/mkfs/mkfs.o 00:06:43.970 CC test/accel/dif/dif.o 00:06:43.970 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:43.970 CC examples/nvme/hotplug/hotplug.o 00:06:43.970 CC examples/nvme/reconnect/reconnect.o 00:06:44.228 CC examples/nvme/hello_world/hello_world.o 00:06:44.229 CC examples/nvme/arbitration/arbitration.o 00:06:44.229 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:44.229 CC examples/nvme/abort/abort.o 00:06:44.229 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:44.229 CC test/lvol/esnap/esnap.o 00:06:44.229 LINK startup 00:06:44.229 CC examples/accel/perf/accel_perf.o 00:06:44.229 LINK boot_partition 00:06:44.229 LINK err_injection 00:06:44.229 LINK doorbell_aers 00:06:44.229 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:44.229 LINK connect_stress 00:06:44.229 LINK reserve 00:06:44.229 CC examples/blob/cli/blobcli.o 00:06:44.229 CC examples/blob/hello_world/hello_blob.o 00:06:44.229 LINK fused_ordering 00:06:44.229 LINK mkfs 00:06:44.229 LINK nvme_dp 00:06:44.229 LINK simple_copy 00:06:44.229 LINK sgl 00:06:44.229 LINK reset 00:06:44.229 LINK aer 00:06:44.229 LINK cmb_copy 00:06:44.229 LINK pmr_persistence 00:06:44.229 LINK overhead 00:06:44.229 LINK fdp 00:06:44.229 LINK hello_world 00:06:44.229 LINK nvme_compliance 00:06:44.229 LINK hotplug 00:06:44.487 LINK arbitration 00:06:44.487 LINK reconnect 00:06:44.487 LINK abort 00:06:44.487 LINK iscsi_fuzz 00:06:44.487 LINK hello_fsdev 00:06:44.487 LINK hello_blob 00:06:44.488 LINK nvme_manage 00:06:44.488 LINK accel_perf 00:06:44.746 LINK dif 00:06:44.746 LINK blobcli 00:06:45.006 LINK cuse 00:06:45.006 CC examples/bdev/hello_world/hello_bdev.o 00:06:45.006 CC examples/bdev/bdevperf/bdevperf.o 00:06:45.265 CC test/bdev/bdevio/bdevio.o 00:06:45.265 LINK hello_bdev 00:06:45.524 LINK bdevio 00:06:45.785 LINK bdevperf 00:06:46.044 CC examples/nvmf/nvmf/nvmf.o 00:06:46.304 LINK nvmf 00:06:47.683 LINK esnap 00:06:47.943 00:06:47.943 real 0m55.127s 00:06:47.943 user 8m15.747s 00:06:47.943 sys 3m37.727s 00:06:47.943 17:23:46 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:47.943 17:23:46 make -- common/autotest_common.sh@10 -- $ set +x 00:06:47.943 ************************************ 00:06:47.943 END TEST make 00:06:47.943 ************************************ 00:06:47.943 17:23:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:47.943 17:23:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:47.943 17:23:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:47.943 17:23:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:47.943 17:23:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:06:47.943 17:23:46 -- pm/common@44 -- $ pid=837782 00:06:47.943 17:23:46 -- pm/common@50 -- $ kill -TERM 837782 00:06:47.943 17:23:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:47.943 17:23:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:06:47.943 17:23:46 -- pm/common@44 -- $ pid=837784 00:06:47.943 17:23:46 -- pm/common@50 -- $ kill -TERM 837784 00:06:47.943 17:23:46 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:06:47.943 17:23:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:06:47.943 17:23:46 -- pm/common@44 -- $ pid=837786 00:06:47.943 17:23:46 -- pm/common@50 -- $ kill -TERM 837786 00:06:47.943 17:23:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:47.943 17:23:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:06:47.943 17:23:46 -- pm/common@44 -- $ pid=837809 00:06:47.943 17:23:46 -- pm/common@50 -- $ sudo -E kill -TERM 837809 00:06:47.943 17:23:47 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:47.943 17:23:47 -- common/autotest_common.sh@1691 -- # lcov --version 00:06:47.943 17:23:47 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:48.203 17:23:47 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:48.203 17:23:47 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.203 17:23:47 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.203 17:23:47 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.203 17:23:47 -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.203 17:23:47 -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.203 17:23:47 -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.203 17:23:47 -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.203 17:23:47 -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.203 17:23:47 -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.203 17:23:47 -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.203 17:23:47 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.203 17:23:47 -- scripts/common.sh@344 -- # case "$op" in 00:06:48.203 17:23:47 -- scripts/common.sh@345 -- # : 1 00:06:48.203 17:23:47 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.203 17:23:47 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.203 17:23:47 -- scripts/common.sh@365 -- # decimal 1 00:06:48.203 17:23:47 -- scripts/common.sh@353 -- # local d=1 00:06:48.203 17:23:47 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.203 17:23:47 -- scripts/common.sh@355 -- # echo 1 00:06:48.203 17:23:47 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.203 17:23:47 -- scripts/common.sh@366 -- # decimal 2 00:06:48.203 17:23:47 -- scripts/common.sh@353 -- # local d=2 00:06:48.203 17:23:47 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.203 17:23:47 -- scripts/common.sh@355 -- # echo 2 00:06:48.203 17:23:47 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.203 17:23:47 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.203 17:23:47 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.203 17:23:47 -- scripts/common.sh@368 -- # return 0 00:06:48.203 17:23:47 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.203 17:23:47 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:48.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.203 --rc genhtml_branch_coverage=1 00:06:48.203 --rc genhtml_function_coverage=1 00:06:48.203 --rc genhtml_legend=1 00:06:48.203 --rc geninfo_all_blocks=1 00:06:48.203 --rc geninfo_unexecuted_blocks=1 00:06:48.203 00:06:48.203 ' 00:06:48.203 17:23:47 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:48.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.203 --rc genhtml_branch_coverage=1 00:06:48.203 --rc genhtml_function_coverage=1 00:06:48.203 --rc genhtml_legend=1 00:06:48.203 --rc geninfo_all_blocks=1 00:06:48.203 --rc geninfo_unexecuted_blocks=1 00:06:48.203 00:06:48.203 ' 00:06:48.203 17:23:47 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:48.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.203 --rc genhtml_branch_coverage=1 00:06:48.203 --rc genhtml_function_coverage=1 00:06:48.203 --rc genhtml_legend=1 00:06:48.203 --rc geninfo_all_blocks=1 00:06:48.203 --rc geninfo_unexecuted_blocks=1 00:06:48.203 00:06:48.203 ' 00:06:48.203 17:23:47 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:48.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.203 --rc genhtml_branch_coverage=1 00:06:48.203 --rc genhtml_function_coverage=1 00:06:48.203 --rc genhtml_legend=1 00:06:48.203 --rc geninfo_all_blocks=1 00:06:48.203 --rc geninfo_unexecuted_blocks=1 00:06:48.203 00:06:48.203 ' 00:06:48.203 17:23:47 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.203 17:23:47 -- nvmf/common.sh@7 -- # uname -s 00:06:48.203 17:23:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.203 17:23:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.203 17:23:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.203 17:23:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.203 17:23:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.203 17:23:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.203 17:23:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.203 17:23:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.203 17:23:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.204 17:23:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.204 17:23:47 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:48.204 17:23:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:48.204 17:23:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.204 17:23:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.204 17:23:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.204 17:23:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.204 17:23:47 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.204 17:23:47 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:48.204 17:23:47 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.204 17:23:47 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.204 17:23:47 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.204 17:23:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.204 17:23:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.204 17:23:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.204 17:23:47 -- paths/export.sh@5 -- # export PATH 00:06:48.204 17:23:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.204 17:23:47 -- nvmf/common.sh@51 -- # : 0 00:06:48.204 17:23:47 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:48.204 17:23:47 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:48.204 17:23:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.204 17:23:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.204 17:23:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.204 17:23:47 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:48.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:48.204 17:23:47 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:48.204 17:23:47 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:48.204 17:23:47 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:48.204 17:23:47 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:48.204 17:23:47 -- spdk/autotest.sh@32 -- # uname -s 00:06:48.204 17:23:47 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:48.204 17:23:47 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:48.204 17:23:47 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
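The '[: : integer expression expected' complaint from nvmf/common.sh line 33 a few entries up comes from bash's numeric test being handed an empty string; the harness shrugs it off, but the noise is avoidable. A minimal sketch of the guarded form, with a hypothetical 'flag' variable standing in for whatever setting was unset here:

    # '[ "$flag" -eq 1 ]' errors out when flag is empty;
    # defaulting the expansion keeps the test numeric.
    flag=""
    if [ "${flag:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi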
00:06:48.204 17:23:47 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:06:48.204 17:23:47 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:48.204 17:23:47 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:48.204 17:23:47 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:48.204 17:23:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:48.204 17:23:47 -- spdk/autotest.sh@48 -- # udevadm_pid=900009 00:06:48.204 17:23:47 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:48.204 17:23:47 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:48.204 17:23:47 -- pm/common@17 -- # local monitor 00:06:48.204 17:23:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:48.204 17:23:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:48.204 17:23:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:48.204 17:23:47 -- pm/common@21 -- # date +%s 00:06:48.204 17:23:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:48.204 17:23:47 -- pm/common@21 -- # date +%s 00:06:48.204 17:23:47 -- pm/common@25 -- # sleep 1 00:06:48.204 17:23:47 -- pm/common@21 -- # date +%s 00:06:48.204 17:23:47 -- pm/common@21 -- # date +%s 00:06:48.204 17:23:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728919427 00:06:48.204 17:23:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728919427 00:06:48.204 17:23:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728919427 00:06:48.204 17:23:47 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728919427 00:06:48.204 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728919427_collect-cpu-load.pm.log 00:06:48.204 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728919427_collect-vmstat.pm.log 00:06:48.204 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728919427_collect-cpu-temp.pm.log 00:06:48.204 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728919427_collect-bmc-pm.bmc.pm.log 00:06:49.141 17:23:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:49.141 17:23:48 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:49.141 17:23:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:49.141 17:23:48 -- common/autotest_common.sh@10 -- # set +x 00:06:49.141 17:23:48 -- spdk/autotest.sh@59 -- # create_test_list 00:06:49.141 17:23:48 -- common/autotest_common.sh@748 -- # xtrace_disable 00:06:49.141 17:23:48 -- common/autotest_common.sh@10 -- # set +x 00:06:49.141 17:23:48 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:06:49.141 17:23:48 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:49.141 17:23:48 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:49.141 17:23:48 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:49.141 17:23:48 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:49.141 17:23:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:49.141 17:23:48 -- common/autotest_common.sh@1455 -- # uname 00:06:49.141 17:23:48 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:49.141 17:23:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:49.141 17:23:48 -- common/autotest_common.sh@1475 -- # uname 00:06:49.141 17:23:48 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:49.141 17:23:48 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:49.142 17:23:48 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:49.399 lcov: LCOV version 1.15 00:06:49.400 17:23:48 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:07:01.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:01.610 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:07:16.533 17:24:13 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:16.533 17:24:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:16.533 17:24:13 -- common/autotest_common.sh@10 -- # set +x 00:07:16.533 17:24:13 -- spdk/autotest.sh@78 -- # rm -f 00:07:16.533 17:24:13 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:17.102 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:07:17.102 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:07:17.102 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:07:17.102 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:07:17.102 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:07:17.102 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:07:17.102 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:07:17.102 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:07:17.102 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:07:17.102 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:07:17.102 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:07:17.102 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:07:17.102 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:07:17.102 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:07:17.361 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:07:17.361 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:07:17.361 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:07:17.361 17:24:16 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:07:17.361 17:24:16 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:17.361 17:24:16 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:17.361 17:24:16 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:17.361 17:24:16 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:17.361 17:24:16 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:17.361 17:24:16 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:17.361 17:24:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:17.361 17:24:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:17.361 17:24:16 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:17.361 17:24:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:17.361 17:24:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:17.361 17:24:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:17.361 17:24:16 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:17.361 17:24:16 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:17.361 No valid GPT data, bailing 00:07:17.361 17:24:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:17.361 17:24:16 -- scripts/common.sh@394 -- # pt= 00:07:17.361 17:24:16 -- scripts/common.sh@395 -- # return 1 00:07:17.361 17:24:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:17.361 1+0 records in 00:07:17.361 1+0 records out 00:07:17.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00178699 s, 587 MB/s 00:07:17.361 17:24:16 -- spdk/autotest.sh@105 -- # sync 00:07:17.361 17:24:16 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:17.361 17:24:16 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:17.361 17:24:16 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:23.934 17:24:21 -- spdk/autotest.sh@111 -- # uname -s 00:07:23.934 17:24:21 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:23.934 17:24:21 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:23.934 17:24:21 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:07:25.841 Hugepages 00:07:25.841 node hugesize free / total 00:07:25.841 node0 1048576kB 0 / 0 00:07:25.841 node0 2048kB 0 / 0 00:07:25.841 node1 1048576kB 0 / 0 00:07:25.841 node1 2048kB 0 / 0 00:07:25.841 00:07:25.841 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:25.841 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:07:25.841 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:07:25.841 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:07:25.841 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:07:25.841 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:07:25.841 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:07:25.841 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:07:25.841 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:07:25.841 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:07:25.841 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:07:25.841 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:07:25.841 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:07:25.841 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:07:25.841 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:07:25.841 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:07:25.841 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:07:25.841 I/OAT 0000:80:04.7 8086 
2021 1 ioatdma - - 00:07:25.841 17:24:24 -- spdk/autotest.sh@117 -- # uname -s 00:07:25.841 17:24:24 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:25.841 17:24:24 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:25.841 17:24:24 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:29.132 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:29.132 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:29.132 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:29.132 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:29.132 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:29.132 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:29.132 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:29.132 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:29.132 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:29.132 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:29.132 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:29.132 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:29.132 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:29.132 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:29.132 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:29.132 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:30.511 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:07:30.511 17:24:29 -- common/autotest_common.sh@1515 -- # sleep 1 00:07:31.449 17:24:30 -- common/autotest_common.sh@1516 -- # bdfs=() 00:07:31.449 17:24:30 -- common/autotest_common.sh@1516 -- # local bdfs 00:07:31.449 17:24:30 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:07:31.449 17:24:30 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:07:31.449 17:24:30 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:31.449 17:24:30 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:31.449 17:24:30 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:31.449 17:24:30 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:31.449 17:24:30 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:31.449 17:24:30 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:07:31.449 17:24:30 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:07:31.449 17:24:30 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:34.742 Waiting for block devices as requested 00:07:34.742 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:07:34.742 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:34.742 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:34.742 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:34.742 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:34.742 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:34.742 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:34.742 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:35.001 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:35.001 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:35.001 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:35.260 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:35.260 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:35.260 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:35.520 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:35.520 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:35.520 0000:80:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:07:35.520 17:24:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:07:35.780 17:24:34 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:07:35.780 17:24:34 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:07:35.780 17:24:34 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:07:35.780 17:24:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:07:35.780 17:24:34 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:07:35.780 17:24:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:07:35.780 17:24:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:07:35.780 17:24:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:07:35.780 17:24:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:07:35.780 17:24:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:07:35.780 17:24:34 -- common/autotest_common.sh@1529 -- # grep oacs 00:07:35.780 17:24:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:07:35.780 17:24:34 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:07:35.780 17:24:34 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:07:35.780 17:24:34 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:07:35.780 17:24:34 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:07:35.780 17:24:34 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:07:35.780 17:24:34 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:07:35.780 17:24:34 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:07:35.780 17:24:34 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:07:35.780 17:24:34 -- common/autotest_common.sh@1541 -- # continue 00:07:35.780 17:24:34 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:35.780 17:24:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:35.780 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:07:35.780 17:24:34 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:35.780 17:24:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:35.780 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:07:35.780 17:24:34 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:39.072 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:39.072 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:39.072 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:39.072 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:39.072 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:39.072 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:39.072 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:39.072 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:39.072 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:39.072 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:39.072 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:39.072 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:39.072 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:39.072 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:39.072 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:39.072 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:40.010 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:07:40.010 17:24:39 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:07:40.010 17:24:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:40.010 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.269 17:24:39 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:40.269 17:24:39 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:07:40.269 17:24:39 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:07:40.269 17:24:39 -- common/autotest_common.sh@1561 -- # bdfs=() 00:07:40.269 17:24:39 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:07:40.269 17:24:39 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:07:40.269 17:24:39 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:07:40.269 17:24:39 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:07:40.269 17:24:39 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:40.269 17:24:39 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:40.269 17:24:39 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:40.269 17:24:39 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:40.269 17:24:39 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:40.269 17:24:39 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:07:40.269 17:24:39 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:07:40.269 17:24:39 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:07:40.269 17:24:39 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:07:40.269 17:24:39 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:07:40.269 17:24:39 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:07:40.269 17:24:39 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:07:40.269 17:24:39 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:07:40.269 17:24:39 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:07:40.269 17:24:39 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:07:40.269 17:24:39 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=914759 00:07:40.269 17:24:39 -- common/autotest_common.sh@1583 -- # waitforlisten 914759 00:07:40.269 17:24:39 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:40.269 17:24:39 -- common/autotest_common.sh@831 -- # '[' -z 914759 ']' 00:07:40.269 17:24:39 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.269 17:24:39 -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.269 17:24:39 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.269 17:24:39 -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.269 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:07:40.269 [2024-10-14 17:24:39.326714] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
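The bdf discovery traced above (gen_nvme.sh piped through jq) is self-contained enough to run by hand; a rough standalone equivalent, with the workspace path hard-coded exactly as this run uses it:

    # Collect NVMe PCI addresses the way the get_nvme_bdfs helper does above.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"    # on this box: 0000:5e:00.0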
00:07:40.269 [2024-10-14 17:24:39.326767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid914759 ] 00:07:40.269 [2024-10-14 17:24:39.396437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.528 [2024-10-14 17:24:39.438559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.528 17:24:39 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.528 17:24:39 -- common/autotest_common.sh@864 -- # return 0 00:07:40.528 17:24:39 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:07:40.528 17:24:39 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:07:40.528 17:24:39 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:07:43.917 nvme0n1 00:07:43.917 17:24:42 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:07:43.917 [2024-10-14 17:24:42.827471] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:07:43.917 request: 00:07:43.917 { 00:07:43.917 "nvme_ctrlr_name": "nvme0", 00:07:43.917 "password": "test", 00:07:43.917 "method": "bdev_nvme_opal_revert", 00:07:43.917 "req_id": 1 00:07:43.917 } 00:07:43.917 Got JSON-RPC error response 00:07:43.917 response: 00:07:43.917 { 00:07:43.917 "code": -32602, 00:07:43.917 "message": "Invalid parameters" 00:07:43.917 } 00:07:43.917 17:24:42 -- common/autotest_common.sh@1589 -- # true 00:07:43.917 17:24:42 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:07:43.917 17:24:42 -- common/autotest_common.sh@1593 -- # killprocess 914759 00:07:43.917 17:24:42 -- common/autotest_common.sh@950 -- # '[' -z 914759 ']' 00:07:43.917 17:24:42 -- common/autotest_common.sh@954 -- # kill -0 914759 00:07:43.917 17:24:42 -- common/autotest_common.sh@955 -- # uname 00:07:43.917 17:24:42 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.917 17:24:42 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 914759 00:07:43.917 17:24:42 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.917 17:24:42 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.917 17:24:42 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 914759' 00:07:43.917 killing process with pid 914759 00:07:43.917 17:24:42 -- common/autotest_common.sh@969 -- # kill 914759 00:07:43.917 17:24:42 -- common/autotest_common.sh@974 -- # wait 914759 00:07:46.452 17:24:44 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:46.452 17:24:44 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:46.452 17:24:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:46.452 17:24:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:46.452 17:24:44 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:46.452 17:24:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:46.452 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:07:46.452 17:24:44 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:46.452 17:24:44 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:46.452 17:24:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.452 17:24:44 -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:07:46.452 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:07:46.452 ************************************ 00:07:46.452 START TEST env 00:07:46.452 ************************************ 00:07:46.452 17:24:45 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:46.452 * Looking for test storage... 00:07:46.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:07:46.452 17:24:45 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:46.452 17:24:45 env -- common/autotest_common.sh@1691 -- # lcov --version 00:07:46.452 17:24:45 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:46.452 17:24:45 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:46.452 17:24:45 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.452 17:24:45 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.452 17:24:45 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.452 17:24:45 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.452 17:24:45 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.452 17:24:45 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.452 17:24:45 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.452 17:24:45 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.452 17:24:45 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.452 17:24:45 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.452 17:24:45 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.452 17:24:45 env -- scripts/common.sh@344 -- # case "$op" in 00:07:46.452 17:24:45 env -- scripts/common.sh@345 -- # : 1 00:07:46.452 17:24:45 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.452 17:24:45 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.452 17:24:45 env -- scripts/common.sh@365 -- # decimal 1 00:07:46.452 17:24:45 env -- scripts/common.sh@353 -- # local d=1 00:07:46.452 17:24:45 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.452 17:24:45 env -- scripts/common.sh@355 -- # echo 1 00:07:46.452 17:24:45 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.452 17:24:45 env -- scripts/common.sh@366 -- # decimal 2 00:07:46.452 17:24:45 env -- scripts/common.sh@353 -- # local d=2 00:07:46.452 17:24:45 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.452 17:24:45 env -- scripts/common.sh@355 -- # echo 2 00:07:46.452 17:24:45 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.452 17:24:45 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.452 17:24:45 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.452 17:24:45 env -- scripts/common.sh@368 -- # return 0 00:07:46.452 17:24:45 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.452 17:24:45 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:46.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.452 --rc genhtml_branch_coverage=1 00:07:46.452 --rc genhtml_function_coverage=1 00:07:46.452 --rc genhtml_legend=1 00:07:46.452 --rc geninfo_all_blocks=1 00:07:46.452 --rc geninfo_unexecuted_blocks=1 00:07:46.452 00:07:46.453 ' 00:07:46.453 17:24:45 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:46.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.453 --rc genhtml_branch_coverage=1 00:07:46.453 --rc genhtml_function_coverage=1 00:07:46.453 --rc genhtml_legend=1 00:07:46.453 --rc geninfo_all_blocks=1 00:07:46.453 --rc geninfo_unexecuted_blocks=1 00:07:46.453 00:07:46.453 ' 00:07:46.453 17:24:45 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:46.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.453 --rc genhtml_branch_coverage=1 00:07:46.453 --rc genhtml_function_coverage=1 00:07:46.453 --rc genhtml_legend=1 00:07:46.453 --rc geninfo_all_blocks=1 00:07:46.453 --rc geninfo_unexecuted_blocks=1 00:07:46.453 00:07:46.453 ' 00:07:46.453 17:24:45 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:46.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.453 --rc genhtml_branch_coverage=1 00:07:46.453 --rc genhtml_function_coverage=1 00:07:46.453 --rc genhtml_legend=1 00:07:46.453 --rc geninfo_all_blocks=1 00:07:46.453 --rc geninfo_unexecuted_blocks=1 00:07:46.453 00:07:46.453 ' 00:07:46.453 17:24:45 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:46.453 17:24:45 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.453 17:24:45 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.453 17:24:45 env -- common/autotest_common.sh@10 -- # set +x 00:07:46.453 ************************************ 00:07:46.453 START TEST env_memory 00:07:46.453 ************************************ 00:07:46.453 17:24:45 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:46.453 00:07:46.453 00:07:46.453 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.453 http://cunit.sourceforge.net/ 00:07:46.453 00:07:46.453 00:07:46.453 Suite: memory 00:07:46.453 Test: alloc and free memory map ...[2024-10-14 17:24:45.275548] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:46.453 passed 00:07:46.453 Test: mem map translation ...[2024-10-14 17:24:45.293078] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:46.453 [2024-10-14 17:24:45.293092] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:46.453 [2024-10-14 17:24:45.293126] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:46.453 [2024-10-14 17:24:45.293132] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:46.453 passed 00:07:46.453 Test: mem map registration ...[2024-10-14 17:24:45.328676] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:46.453 [2024-10-14 17:24:45.328698] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:46.453 passed 00:07:46.453 Test: mem map adjacent registrations ...passed 00:07:46.453 00:07:46.453 Run Summary: Type Total Ran Passed Failed Inactive 00:07:46.453 suites 1 1 n/a 0 0 00:07:46.453 tests 4 4 4 0 0 00:07:46.453 asserts 152 152 152 0 n/a 00:07:46.453 00:07:46.453 Elapsed time = 0.133 seconds 00:07:46.453 00:07:46.453 real 0m0.145s 00:07:46.453 user 0m0.139s 00:07:46.453 sys 0m0.006s 00:07:46.453 17:24:45 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.453 17:24:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:46.453 ************************************ 00:07:46.453 END TEST env_memory 00:07:46.453 ************************************ 00:07:46.453 17:24:45 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:46.453 17:24:45 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.453 17:24:45 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.453 17:24:45 env -- common/autotest_common.sh@10 -- # set +x 00:07:46.453 ************************************ 00:07:46.453 START TEST env_vtophys 00:07:46.453 ************************************ 00:07:46.453 17:24:45 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:46.453 EAL: lib.eal log level changed from notice to debug 00:07:46.453 EAL: Detected lcore 0 as core 0 on socket 0 00:07:46.453 EAL: Detected lcore 1 as core 1 on socket 0 00:07:46.453 EAL: Detected lcore 2 as core 2 on socket 0 00:07:46.453 EAL: Detected lcore 3 as core 3 on socket 0 00:07:46.453 EAL: Detected lcore 4 as core 4 on socket 0 00:07:46.453 EAL: Detected lcore 5 as core 5 on socket 0 00:07:46.453 EAL: Detected lcore 6 as core 6 on socket 0 00:07:46.453 EAL: Detected lcore 7 as core 8 on socket 0 00:07:46.453 EAL: Detected lcore 8 as core 9 on socket 0 00:07:46.453 EAL: Detected lcore 9 as core 10 on socket 0 00:07:46.453 EAL: Detected lcore 10 as 
core 11 on socket 0 00:07:46.453 EAL: Detected lcore 11 as core 12 on socket 0 00:07:46.453 EAL: Detected lcore 12 as core 13 on socket 0 00:07:46.453 EAL: Detected lcore 13 as core 16 on socket 0 00:07:46.453 EAL: Detected lcore 14 as core 17 on socket 0 00:07:46.453 EAL: Detected lcore 15 as core 18 on socket 0 00:07:46.453 EAL: Detected lcore 16 as core 19 on socket 0 00:07:46.453 EAL: Detected lcore 17 as core 20 on socket 0 00:07:46.453 EAL: Detected lcore 18 as core 21 on socket 0 00:07:46.453 EAL: Detected lcore 19 as core 25 on socket 0 00:07:46.453 EAL: Detected lcore 20 as core 26 on socket 0 00:07:46.453 EAL: Detected lcore 21 as core 27 on socket 0 00:07:46.453 EAL: Detected lcore 22 as core 28 on socket 0 00:07:46.453 EAL: Detected lcore 23 as core 29 on socket 0 00:07:46.453 EAL: Detected lcore 24 as core 0 on socket 1 00:07:46.453 EAL: Detected lcore 25 as core 1 on socket 1 00:07:46.453 EAL: Detected lcore 26 as core 2 on socket 1 00:07:46.453 EAL: Detected lcore 27 as core 3 on socket 1 00:07:46.453 EAL: Detected lcore 28 as core 4 on socket 1 00:07:46.453 EAL: Detected lcore 29 as core 5 on socket 1 00:07:46.453 EAL: Detected lcore 30 as core 6 on socket 1 00:07:46.453 EAL: Detected lcore 31 as core 8 on socket 1 00:07:46.453 EAL: Detected lcore 32 as core 10 on socket 1 00:07:46.453 EAL: Detected lcore 33 as core 11 on socket 1 00:07:46.453 EAL: Detected lcore 34 as core 12 on socket 1 00:07:46.453 EAL: Detected lcore 35 as core 13 on socket 1 00:07:46.453 EAL: Detected lcore 36 as core 16 on socket 1 00:07:46.453 EAL: Detected lcore 37 as core 17 on socket 1 00:07:46.453 EAL: Detected lcore 38 as core 18 on socket 1 00:07:46.453 EAL: Detected lcore 39 as core 19 on socket 1 00:07:46.453 EAL: Detected lcore 40 as core 20 on socket 1 00:07:46.453 EAL: Detected lcore 41 as core 21 on socket 1 00:07:46.453 EAL: Detected lcore 42 as core 24 on socket 1 00:07:46.453 EAL: Detected lcore 43 as core 25 on socket 1 00:07:46.453 EAL: Detected lcore 44 as core 26 on socket 1 00:07:46.453 EAL: Detected lcore 45 as core 27 on socket 1 00:07:46.453 EAL: Detected lcore 46 as core 28 on socket 1 00:07:46.453 EAL: Detected lcore 47 as core 29 on socket 1 00:07:46.453 EAL: Detected lcore 48 as core 0 on socket 0 00:07:46.453 EAL: Detected lcore 49 as core 1 on socket 0 00:07:46.453 EAL: Detected lcore 50 as core 2 on socket 0 00:07:46.453 EAL: Detected lcore 51 as core 3 on socket 0 00:07:46.453 EAL: Detected lcore 52 as core 4 on socket 0 00:07:46.453 EAL: Detected lcore 53 as core 5 on socket 0 00:07:46.453 EAL: Detected lcore 54 as core 6 on socket 0 00:07:46.453 EAL: Detected lcore 55 as core 8 on socket 0 00:07:46.453 EAL: Detected lcore 56 as core 9 on socket 0 00:07:46.453 EAL: Detected lcore 57 as core 10 on socket 0 00:07:46.453 EAL: Detected lcore 58 as core 11 on socket 0 00:07:46.453 EAL: Detected lcore 59 as core 12 on socket 0 00:07:46.453 EAL: Detected lcore 60 as core 13 on socket 0 00:07:46.453 EAL: Detected lcore 61 as core 16 on socket 0 00:07:46.453 EAL: Detected lcore 62 as core 17 on socket 0 00:07:46.453 EAL: Detected lcore 63 as core 18 on socket 0 00:07:46.453 EAL: Detected lcore 64 as core 19 on socket 0 00:07:46.453 EAL: Detected lcore 65 as core 20 on socket 0 00:07:46.453 EAL: Detected lcore 66 as core 21 on socket 0 00:07:46.453 EAL: Detected lcore 67 as core 25 on socket 0 00:07:46.453 EAL: Detected lcore 68 as core 26 on socket 0 00:07:46.453 EAL: Detected lcore 69 as core 27 on socket 0 00:07:46.453 EAL: Detected lcore 70 as core 28 on socket 0 
00:07:46.453 EAL: Detected lcore 71 as core 29 on socket 0 00:07:46.453 EAL: Detected lcore 72 as core 0 on socket 1 00:07:46.453 EAL: Detected lcore 73 as core 1 on socket 1 00:07:46.453 EAL: Detected lcore 74 as core 2 on socket 1 00:07:46.453 EAL: Detected lcore 75 as core 3 on socket 1 00:07:46.453 EAL: Detected lcore 76 as core 4 on socket 1 00:07:46.453 EAL: Detected lcore 77 as core 5 on socket 1 00:07:46.453 EAL: Detected lcore 78 as core 6 on socket 1 00:07:46.453 EAL: Detected lcore 79 as core 8 on socket 1 00:07:46.453 EAL: Detected lcore 80 as core 10 on socket 1 00:07:46.453 EAL: Detected lcore 81 as core 11 on socket 1 00:07:46.453 EAL: Detected lcore 82 as core 12 on socket 1 00:07:46.453 EAL: Detected lcore 83 as core 13 on socket 1 00:07:46.453 EAL: Detected lcore 84 as core 16 on socket 1 00:07:46.453 EAL: Detected lcore 85 as core 17 on socket 1 00:07:46.453 EAL: Detected lcore 86 as core 18 on socket 1 00:07:46.453 EAL: Detected lcore 87 as core 19 on socket 1 00:07:46.453 EAL: Detected lcore 88 as core 20 on socket 1 00:07:46.453 EAL: Detected lcore 89 as core 21 on socket 1 00:07:46.453 EAL: Detected lcore 90 as core 24 on socket 1 00:07:46.453 EAL: Detected lcore 91 as core 25 on socket 1 00:07:46.453 EAL: Detected lcore 92 as core 26 on socket 1 00:07:46.453 EAL: Detected lcore 93 as core 27 on socket 1 00:07:46.453 EAL: Detected lcore 94 as core 28 on socket 1 00:07:46.453 EAL: Detected lcore 95 as core 29 on socket 1 00:07:46.453 EAL: Maximum logical cores by configuration: 128 00:07:46.453 EAL: Detected CPU lcores: 96 00:07:46.453 EAL: Detected NUMA nodes: 2 00:07:46.453 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:46.453 EAL: Detected shared linkage of DPDK 00:07:46.453 EAL: No shared files mode enabled, IPC will be disabled 00:07:46.453 EAL: Bus pci wants IOVA as 'DC' 00:07:46.453 EAL: Buses did not request a specific IOVA mode. 00:07:46.453 EAL: IOMMU is available, selecting IOVA as VA mode. 00:07:46.453 EAL: Selected IOVA mode 'VA' 00:07:46.453 EAL: Probing VFIO support... 00:07:46.453 EAL: IOMMU type 1 (Type 1) is supported 00:07:46.453 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:46.453 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:46.453 EAL: VFIO support initialized 00:07:46.453 EAL: Ask a virtual area of 0x2e000 bytes 00:07:46.453 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:46.453 EAL: Setting up physically contiguous memory... 
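EAL has now sized the machine (96 lcores, 2 NUMA nodes, IOVA as VA, VFIO type 1) and is about to reserve its memseg lists, which presumes 2MB hugepages already exist on both nodes. The harness delegates that to scripts/setup.sh; a bare-sysfs sketch with an arbitrary count, for anyone reproducing this outside the CI:

    # Reserve 2MB hugepages on both NUMA nodes (1024 per node is illustrative).
    for node in /sys/devices/system/node/node{0,1}; do
        echo 1024 | sudo tee "$node/hugepages/hugepages-2048kB/nr_hugepages"
    done
    grep '^HugePages' /proc/meminfo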
00:07:46.453 EAL: Setting maximum number of open files to 524288 00:07:46.453 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:46.454 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:07:46.454 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:46.454 EAL: Ask a virtual area of 0x61000 bytes 00:07:46.454 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:46.454 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:46.454 EAL: Ask a virtual area of 0x400000000 bytes 00:07:46.454 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:46.454 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:46.454 EAL: Ask a virtual area of 0x61000 bytes 00:07:46.454 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:46.454 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:46.454 EAL: Ask a virtual area of 0x400000000 bytes 00:07:46.454 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:46.454 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:46.454 EAL: Ask a virtual area of 0x61000 bytes 00:07:46.454 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:46.454 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:46.454 EAL: Ask a virtual area of 0x400000000 bytes 00:07:46.454 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:46.454 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:46.454 EAL: Ask a virtual area of 0x61000 bytes 00:07:46.454 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:46.454 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:46.454 EAL: Ask a virtual area of 0x400000000 bytes 00:07:46.454 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:46.454 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:46.454 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:07:46.454 EAL: Ask a virtual area of 0x61000 bytes 00:07:46.454 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:07:46.454 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:46.454 EAL: Ask a virtual area of 0x400000000 bytes 00:07:46.454 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:07:46.454 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:07:46.454 EAL: Ask a virtual area of 0x61000 bytes 00:07:46.454 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:07:46.454 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:46.454 EAL: Ask a virtual area of 0x400000000 bytes 00:07:46.454 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:07:46.454 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:07:46.454 EAL: Ask a virtual area of 0x61000 bytes 00:07:46.454 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:07:46.454 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:46.454 EAL: Ask a virtual area of 0x400000000 bytes 00:07:46.454 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:07:46.454 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:07:46.454 EAL: Ask a virtual area of 0x61000 bytes 00:07:46.454 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:07:46.454 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:46.454 EAL: Ask a virtual area of 0x400000000 bytes 00:07:46.454 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:07:46.454 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:07:46.454 EAL: Hugepages will be freed exactly as allocated. 00:07:46.454 EAL: No shared files mode enabled, IPC is disabled 00:07:46.454 EAL: No shared files mode enabled, IPC is disabled 00:07:46.454 EAL: TSC frequency is ~2100000 KHz 00:07:46.454 EAL: Main lcore 0 is ready (tid=7f01aeb5ca00;cpuset=[0]) 00:07:46.454 EAL: Trying to obtain current memory policy. 00:07:46.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:46.454 EAL: Restoring previous memory policy: 0 00:07:46.454 EAL: request: mp_malloc_sync 00:07:46.454 EAL: No shared files mode enabled, IPC is disabled 00:07:46.454 EAL: Heap on socket 0 was expanded by 2MB 00:07:46.454 EAL: No shared files mode enabled, IPC is disabled 00:07:46.454 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:46.454 EAL: Mem event callback 'spdk:(nil)' registered 00:07:46.454 00:07:46.454 00:07:46.454 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.454 http://cunit.sourceforge.net/ 00:07:46.454 00:07:46.454 00:07:46.454 Suite: components_suite 00:07:46.454 Test: vtophys_malloc_test ...passed 00:07:46.454 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:46.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:46.454 EAL: Restoring previous memory policy: 4 00:07:46.454 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.454 EAL: request: mp_malloc_sync 00:07:46.454 EAL: No shared files mode enabled, IPC is disabled 00:07:46.454 EAL: Heap on socket 0 was expanded by 4MB 00:07:46.454 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.454 EAL: request: mp_malloc_sync 00:07:46.454 EAL: No shared files mode enabled, IPC is disabled 00:07:46.454 EAL: Heap on socket 0 was shrunk by 4MB 00:07:46.454 EAL: Trying to obtain current memory policy. 00:07:46.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:46.454 EAL: Restoring previous memory policy: 4 00:07:46.454 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.454 EAL: request: mp_malloc_sync 00:07:46.454 EAL: No shared files mode enabled, IPC is disabled 00:07:46.454 EAL: Heap on socket 0 was expanded by 6MB 00:07:46.454 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.454 EAL: request: mp_malloc_sync 00:07:46.454 EAL: No shared files mode enabled, IPC is disabled 00:07:46.454 EAL: Heap on socket 0 was shrunk by 6MB 00:07:46.454 EAL: Trying to obtain current memory policy. 00:07:46.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:46.454 EAL: Restoring previous memory policy: 4 00:07:46.454 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.454 EAL: request: mp_malloc_sync 00:07:46.454 EAL: No shared files mode enabled, IPC is disabled 00:07:46.454 EAL: Heap on socket 0 was expanded by 10MB 00:07:46.454 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.454 EAL: request: mp_malloc_sync 00:07:46.454 EAL: No shared files mode enabled, IPC is disabled 00:07:46.454 EAL: Heap on socket 0 was shrunk by 10MB 00:07:46.454 EAL: Trying to obtain current memory policy. 
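The vtophys_spdk_malloc_test expansions follow a 2^k + 2MB ladder: 4, 6 and 10MB so far, continuing through 1026MB below. A throwaway check of the sequence, nothing more:

    # Reproduce the allocation-size ladder from the heap expand messages.
    for ((k = 1; k <= 10; k++)); do printf '%dMB ' $((2 ** k + 2)); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB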
00:07:46.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:46.454 EAL: Restoring previous memory policy: 4 00:07:46.454 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.454 EAL: request: mp_malloc_sync 00:07:46.454 EAL: No shared files mode enabled, IPC is disabled 00:07:46.454 EAL: Heap on socket 0 was expanded by 18MB 00:07:46.454 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.454 EAL: request: mp_malloc_sync 00:07:46.454 EAL: No shared files mode enabled, IPC is disabled 00:07:46.454 EAL: Heap on socket 0 was shrunk by 18MB 00:07:46.454 EAL: Trying to obtain current memory policy. 00:07:46.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:46.454 EAL: Restoring previous memory policy: 4 00:07:46.454 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.454 EAL: request: mp_malloc_sync 00:07:46.454 EAL: No shared files mode enabled, IPC is disabled 00:07:46.454 EAL: Heap on socket 0 was expanded by 34MB 00:07:46.454 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.454 EAL: request: mp_malloc_sync 00:07:46.454 EAL: No shared files mode enabled, IPC is disabled 00:07:46.454 EAL: Heap on socket 0 was shrunk by 34MB 00:07:46.454 EAL: Trying to obtain current memory policy. 00:07:46.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:46.454 EAL: Restoring previous memory policy: 4 00:07:46.454 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.454 EAL: request: mp_malloc_sync 00:07:46.454 EAL: No shared files mode enabled, IPC is disabled 00:07:46.454 EAL: Heap on socket 0 was expanded by 66MB 00:07:46.454 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.454 EAL: request: mp_malloc_sync 00:07:46.454 EAL: No shared files mode enabled, IPC is disabled 00:07:46.454 EAL: Heap on socket 0 was shrunk by 66MB 00:07:46.454 EAL: Trying to obtain current memory policy. 00:07:46.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:46.714 EAL: Restoring previous memory policy: 4 00:07:46.714 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.714 EAL: request: mp_malloc_sync 00:07:46.714 EAL: No shared files mode enabled, IPC is disabled 00:07:46.714 EAL: Heap on socket 0 was expanded by 130MB 00:07:46.714 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.714 EAL: request: mp_malloc_sync 00:07:46.714 EAL: No shared files mode enabled, IPC is disabled 00:07:46.714 EAL: Heap on socket 0 was shrunk by 130MB 00:07:46.714 EAL: Trying to obtain current memory policy. 00:07:46.714 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:46.714 EAL: Restoring previous memory policy: 4 00:07:46.714 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.714 EAL: request: mp_malloc_sync 00:07:46.714 EAL: No shared files mode enabled, IPC is disabled 00:07:46.714 EAL: Heap on socket 0 was expanded by 258MB 00:07:46.714 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.714 EAL: request: mp_malloc_sync 00:07:46.714 EAL: No shared files mode enabled, IPC is disabled 00:07:46.714 EAL: Heap on socket 0 was shrunk by 258MB 00:07:46.714 EAL: Trying to obtain current memory policy. 
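Every iteration above pairs an mp_malloc_sync expansion with a matching shrink once the buffer is freed ('Hugepages will be freed exactly as allocated', per the earlier EAL banner), so free hugepage counts should drift back to baseline between steps. One way to watch that while the suite runs, purely as a debugging aid:

    # Observe hugepage consumption alongside the expand/shrink messages.
    watch -n 1 'grep -E "HugePages_(Total|Free)" /proc/meminfo'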
00:07:46.714 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:46.973 EAL: Restoring previous memory policy: 4 00:07:46.973 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.973 EAL: request: mp_malloc_sync 00:07:46.973 EAL: No shared files mode enabled, IPC is disabled 00:07:46.973 EAL: Heap on socket 0 was expanded by 514MB 00:07:46.973 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.973 EAL: request: mp_malloc_sync 00:07:46.973 EAL: No shared files mode enabled, IPC is disabled 00:07:46.973 EAL: Heap on socket 0 was shrunk by 514MB 00:07:46.973 EAL: Trying to obtain current memory policy. 00:07:46.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:47.232 EAL: Restoring previous memory policy: 4 00:07:47.232 EAL: Calling mem event callback 'spdk:(nil)' 00:07:47.232 EAL: request: mp_malloc_sync 00:07:47.232 EAL: No shared files mode enabled, IPC is disabled 00:07:47.232 EAL: Heap on socket 0 was expanded by 1026MB 00:07:47.491 EAL: Calling mem event callback 'spdk:(nil)' 00:07:47.491 EAL: request: mp_malloc_sync 00:07:47.491 EAL: No shared files mode enabled, IPC is disabled 00:07:47.491 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:47.491 passed 00:07:47.491 00:07:47.491 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.491 suites 1 1 n/a 0 0 00:07:47.491 tests 2 2 2 0 0 00:07:47.491 asserts 497 497 497 0 n/a 00:07:47.491 00:07:47.491 Elapsed time = 0.974 seconds 00:07:47.491 EAL: Calling mem event callback 'spdk:(nil)' 00:07:47.491 EAL: request: mp_malloc_sync 00:07:47.491 EAL: No shared files mode enabled, IPC is disabled 00:07:47.491 EAL: Heap on socket 0 was shrunk by 2MB 00:07:47.491 EAL: No shared files mode enabled, IPC is disabled 00:07:47.491 EAL: No shared files mode enabled, IPC is disabled 00:07:47.491 EAL: No shared files mode enabled, IPC is disabled 00:07:47.491 00:07:47.491 real 0m1.112s 00:07:47.491 user 0m0.642s 00:07:47.492 sys 0m0.434s 00:07:47.492 17:24:46 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.492 17:24:46 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:47.492 ************************************ 00:07:47.492 END TEST env_vtophys 00:07:47.492 ************************************ 00:07:47.492 17:24:46 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:47.492 17:24:46 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.492 17:24:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.492 17:24:46 env -- common/autotest_common.sh@10 -- # set +x 00:07:47.492 ************************************ 00:07:47.492 START TEST env_pci 00:07:47.492 ************************************ 00:07:47.492 17:24:46 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:47.751 00:07:47.751 00:07:47.751 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.751 http://cunit.sourceforge.net/ 00:07:47.751 00:07:47.751 00:07:47.751 Suite: pci 00:07:47.751 Test: pci_hook ...[2024-10-14 17:24:46.646349] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1111:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 916080 has claimed it 00:07:47.751 EAL: Cannot find device (10000:00:01.0) 00:07:47.751 EAL: Failed to attach device on primary process 00:07:47.751 passed 00:07:47.751 00:07:47.751 Run Summary: Type Total Ran Passed Failed Inactive 
00:07:47.751 suites 1 1 n/a 0 0 00:07:47.751 tests 1 1 1 0 0 00:07:47.751 asserts 25 25 25 0 n/a 00:07:47.751 00:07:47.751 Elapsed time = 0.028 seconds 00:07:47.751 00:07:47.751 real 0m0.049s 00:07:47.751 user 0m0.018s 00:07:47.751 sys 0m0.031s 00:07:47.751 17:24:46 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.751 17:24:46 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:47.751 ************************************ 00:07:47.751 END TEST env_pci 00:07:47.751 ************************************ 00:07:47.751 17:24:46 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:47.751 17:24:46 env -- env/env.sh@15 -- # uname 00:07:47.751 17:24:46 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:47.751 17:24:46 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:47.751 17:24:46 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:47.751 17:24:46 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:47.751 17:24:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.751 17:24:46 env -- common/autotest_common.sh@10 -- # set +x 00:07:47.751 ************************************ 00:07:47.751 START TEST env_dpdk_post_init 00:07:47.751 ************************************ 00:07:47.751 17:24:46 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:47.751 EAL: Detected CPU lcores: 96 00:07:47.751 EAL: Detected NUMA nodes: 2 00:07:47.751 EAL: Detected shared linkage of DPDK 00:07:47.751 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:47.751 EAL: Selected IOVA mode 'VA' 00:07:47.751 EAL: VFIO support initialized 00:07:47.751 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:47.751 EAL: Using IOMMU type 1 (Type 1) 00:07:47.751 EAL: Ignore mapping IO port bar(1) 00:07:47.751 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:07:48.011 EAL: Ignore mapping IO port bar(1) 00:07:48.011 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:07:48.011 EAL: Ignore mapping IO port bar(1) 00:07:48.011 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:07:48.011 EAL: Ignore mapping IO port bar(1) 00:07:48.011 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:07:48.011 EAL: Ignore mapping IO port bar(1) 00:07:48.011 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:07:48.011 EAL: Ignore mapping IO port bar(1) 00:07:48.011 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:07:48.011 EAL: Ignore mapping IO port bar(1) 00:07:48.011 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:07:48.011 EAL: Ignore mapping IO port bar(1) 00:07:48.011 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:07:48.579 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:07:48.839 EAL: Ignore mapping IO port bar(1) 00:07:48.839 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:07:48.839 EAL: Ignore mapping IO port bar(1) 00:07:48.839 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:07:48.839 EAL: Ignore mapping IO port bar(1) 00:07:48.839 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:07:48.839 EAL: Ignore mapping IO port bar(1) 00:07:48.839 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:07:48.839 EAL: Ignore mapping IO port bar(1) 00:07:48.839 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:07:48.839 EAL: Ignore mapping IO port bar(1) 00:07:48.839 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:07:48.839 EAL: Ignore mapping IO port bar(1) 00:07:48.839 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:07:48.839 EAL: Ignore mapping IO port bar(1) 00:07:48.839 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:07:53.032 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:07:53.032 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:07:53.032 Starting DPDK initialization... 00:07:53.032 Starting SPDK post initialization... 00:07:53.032 SPDK NVMe probe 00:07:53.032 Attaching to 0000:5e:00.0 00:07:53.032 Attached to 0000:5e:00.0 00:07:53.032 Cleaning up... 00:07:53.032 00:07:53.032 real 0m4.940s 00:07:53.032 user 0m3.510s 00:07:53.032 sys 0m0.502s 00:07:53.032 17:24:51 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.032 17:24:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:53.032 ************************************ 00:07:53.032 END TEST env_dpdk_post_init 00:07:53.032 ************************************ 00:07:53.032 17:24:51 env -- env/env.sh@26 -- # uname 00:07:53.032 17:24:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:53.032 17:24:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:53.032 17:24:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:53.032 17:24:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.032 17:24:51 env -- common/autotest_common.sh@10 -- # set +x 00:07:53.032 ************************************ 00:07:53.032 START TEST env_mem_callbacks 00:07:53.032 ************************************ 00:07:53.032 17:24:51 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:53.032 EAL: Detected CPU lcores: 96 00:07:53.032 EAL: Detected NUMA nodes: 2 00:07:53.032 EAL: Detected shared linkage of DPDK 00:07:53.032 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:53.032 EAL: Selected IOVA mode 'VA' 00:07:53.032 EAL: VFIO support initialized 00:07:53.032 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:53.032 00:07:53.032 00:07:53.032 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.032 http://cunit.sourceforge.net/ 00:07:53.032 00:07:53.032 00:07:53.032 Suite: memory 00:07:53.032 Test: test ... 
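The 'Suite: memory' trace that follows interleaves explicit spdk_mem_register/spdk_mem_unregister notifications (the register/unregister lines) with plain malloc traffic that needs no registration, and the suite only passes if the two sides balance. A hedged shell check of that invariant, reusing the mem_callbacks binary path from the run_test line above:

# Count both sides of the register/unregister trace; they must match.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
out=$(sudo "$SPDK_DIR/test/env/mem_callbacks/mem_callbacks" 2>&1)
regs=$(grep -c '^register ' <<<"$out" || true)
unregs=$(grep -c '^unregister ' <<<"$out" || true)
[ "$regs" -eq "$unregs" ] && echo "balanced: $regs register/unregister pairs"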
00:07:53.032 register 0x200000200000 2097152 00:07:53.032 malloc 3145728 00:07:53.032 register 0x200000400000 4194304 00:07:53.032 buf 0x200000500000 len 3145728 PASSED 00:07:53.032 malloc 64 00:07:53.032 buf 0x2000004fff40 len 64 PASSED 00:07:53.032 malloc 4194304 00:07:53.032 register 0x200000800000 6291456 00:07:53.032 buf 0x200000a00000 len 4194304 PASSED 00:07:53.032 free 0x200000500000 3145728 00:07:53.032 free 0x2000004fff40 64 00:07:53.032 unregister 0x200000400000 4194304 PASSED 00:07:53.032 free 0x200000a00000 4194304 00:07:53.032 unregister 0x200000800000 6291456 PASSED 00:07:53.032 malloc 8388608 00:07:53.032 register 0x200000400000 10485760 00:07:53.032 buf 0x200000600000 len 8388608 PASSED 00:07:53.032 free 0x200000600000 8388608 00:07:53.032 unregister 0x200000400000 10485760 PASSED 00:07:53.032 passed 00:07:53.032 00:07:53.032 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.032 suites 1 1 n/a 0 0 00:07:53.032 tests 1 1 1 0 0 00:07:53.032 asserts 15 15 15 0 n/a 00:07:53.032 00:07:53.032 Elapsed time = 0.008 seconds 00:07:53.032 00:07:53.032 real 0m0.058s 00:07:53.032 user 0m0.023s 00:07:53.032 sys 0m0.034s 00:07:53.032 17:24:51 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.032 17:24:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:53.032 ************************************ 00:07:53.032 END TEST env_mem_callbacks 00:07:53.032 ************************************ 00:07:53.032 00:07:53.032 real 0m6.834s 00:07:53.032 user 0m4.596s 00:07:53.032 sys 0m1.309s 00:07:53.032 17:24:51 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.032 17:24:51 env -- common/autotest_common.sh@10 -- # set +x 00:07:53.032 ************************************ 00:07:53.032 END TEST env 00:07:53.032 ************************************ 00:07:53.032 17:24:51 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:53.032 17:24:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:53.032 17:24:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.032 17:24:51 -- common/autotest_common.sh@10 -- # set +x 00:07:53.032 ************************************ 00:07:53.032 START TEST rpc 00:07:53.032 ************************************ 00:07:53.032 17:24:51 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:53.032 * Looking for test storage... 
00:07:53.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:53.032 17:24:52 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:53.032 17:24:52 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:53.032 17:24:52 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:53.032 17:24:52 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:53.032 17:24:52 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.032 17:24:52 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.032 17:24:52 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.032 17:24:52 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.032 17:24:52 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.032 17:24:52 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.032 17:24:52 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.032 17:24:52 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.032 17:24:52 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.032 17:24:52 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.032 17:24:52 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.032 17:24:52 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:53.032 17:24:52 rpc -- scripts/common.sh@345 -- # : 1 00:07:53.032 17:24:52 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.032 17:24:52 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:53.032 17:24:52 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:53.032 17:24:52 rpc -- scripts/common.sh@353 -- # local d=1 00:07:53.032 17:24:52 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.032 17:24:52 rpc -- scripts/common.sh@355 -- # echo 1 00:07:53.032 17:24:52 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.032 17:24:52 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:53.032 17:24:52 rpc -- scripts/common.sh@353 -- # local d=2 00:07:53.032 17:24:52 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.032 17:24:52 rpc -- scripts/common.sh@355 -- # echo 2 00:07:53.032 17:24:52 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.032 17:24:52 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.032 17:24:52 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.032 17:24:52 rpc -- scripts/common.sh@368 -- # return 0 00:07:53.032 17:24:52 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.032 17:24:52 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:53.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.032 --rc genhtml_branch_coverage=1 00:07:53.032 --rc genhtml_function_coverage=1 00:07:53.032 --rc genhtml_legend=1 00:07:53.032 --rc geninfo_all_blocks=1 00:07:53.032 --rc geninfo_unexecuted_blocks=1 00:07:53.032 00:07:53.032 ' 00:07:53.032 17:24:52 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:53.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.032 --rc genhtml_branch_coverage=1 00:07:53.032 --rc genhtml_function_coverage=1 00:07:53.032 --rc genhtml_legend=1 00:07:53.032 --rc geninfo_all_blocks=1 00:07:53.032 --rc geninfo_unexecuted_blocks=1 00:07:53.032 00:07:53.032 ' 00:07:53.032 17:24:52 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:53.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.032 --rc genhtml_branch_coverage=1 00:07:53.032 --rc genhtml_function_coverage=1 
00:07:53.032 --rc genhtml_legend=1 00:07:53.032 --rc geninfo_all_blocks=1 00:07:53.032 --rc geninfo_unexecuted_blocks=1 00:07:53.032 00:07:53.032 ' 00:07:53.032 17:24:52 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:53.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.032 --rc genhtml_branch_coverage=1 00:07:53.032 --rc genhtml_function_coverage=1 00:07:53.032 --rc genhtml_legend=1 00:07:53.032 --rc geninfo_all_blocks=1 00:07:53.032 --rc geninfo_unexecuted_blocks=1 00:07:53.032 00:07:53.032 ' 00:07:53.032 17:24:52 rpc -- rpc/rpc.sh@65 -- # spdk_pid=917124 00:07:53.032 17:24:52 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:53.032 17:24:52 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:53.032 17:24:52 rpc -- rpc/rpc.sh@67 -- # waitforlisten 917124 00:07:53.032 17:24:52 rpc -- common/autotest_common.sh@831 -- # '[' -z 917124 ']' 00:07:53.032 17:24:52 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.032 17:24:52 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.032 17:24:52 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.032 17:24:52 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.032 17:24:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.032 [2024-10-14 17:24:52.161806] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:07:53.032 [2024-10-14 17:24:52.161850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid917124 ] 00:07:53.291 [2024-10-14 17:24:52.230742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.291 [2024-10-14 17:24:52.273457] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:53.291 [2024-10-14 17:24:52.273489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 917124' to capture a snapshot of events at runtime. 00:07:53.291 [2024-10-14 17:24:52.273496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.292 [2024-10-14 17:24:52.273502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.292 [2024-10-14 17:24:52.273506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid917124 for offline analysis/debug. 
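The target above was started with '-e bdev', so only the bdev tracepoint group is enabled, and app_setup_trace prints exactly where the shared-memory trace file lives. A hedged sketch of the same startup plus a trace snapshot, using only paths and flags taken from the NOTICE lines above:

# Start spdk_tgt with the bdev tracepoint group, wait for its RPC socket,
# then copy the shm trace file for offline decoding.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
"$SPDK_DIR/build/bin/spdk_tgt" -e bdev &
pid=$!
until "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
  sleep 0.2    # simple poll in place of the harness's waitforlisten
done
cp "/dev/shm/spdk_tgt_trace.pid$pid" /tmp/spdk_trace.snap   # file name format per the NOTICE above
kill "$pid"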
00:07:53.292 [2024-10-14 17:24:52.274042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.551 17:24:52 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.551 17:24:52 rpc -- common/autotest_common.sh@864 -- # return 0 00:07:53.551 17:24:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:53.551 17:24:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:53.551 17:24:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:53.551 17:24:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:53.551 17:24:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:53.551 17:24:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.551 17:24:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.551 ************************************ 00:07:53.551 START TEST rpc_integrity 00:07:53.551 ************************************ 00:07:53.551 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:53.551 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:53.551 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.551 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:53.551 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.551 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:53.551 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:53.551 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:53.551 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:53.551 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.551 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:53.551 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.551 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:53.551 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:53.551 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.551 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:53.551 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.551 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:53.551 { 00:07:53.551 "name": "Malloc0", 00:07:53.551 "aliases": [ 00:07:53.551 "a4eb1e2a-5291-4876-b52b-e05f4b1f28ff" 00:07:53.551 ], 00:07:53.551 "product_name": "Malloc disk", 00:07:53.551 "block_size": 512, 00:07:53.551 "num_blocks": 16384, 00:07:53.551 "uuid": "a4eb1e2a-5291-4876-b52b-e05f4b1f28ff", 00:07:53.551 "assigned_rate_limits": { 00:07:53.551 "rw_ios_per_sec": 0, 00:07:53.551 "rw_mbytes_per_sec": 0, 00:07:53.551 "r_mbytes_per_sec": 0, 00:07:53.551 "w_mbytes_per_sec": 0 00:07:53.551 }, 
00:07:53.551 "claimed": false, 00:07:53.551 "zoned": false, 00:07:53.551 "supported_io_types": { 00:07:53.551 "read": true, 00:07:53.551 "write": true, 00:07:53.551 "unmap": true, 00:07:53.551 "flush": true, 00:07:53.551 "reset": true, 00:07:53.551 "nvme_admin": false, 00:07:53.551 "nvme_io": false, 00:07:53.551 "nvme_io_md": false, 00:07:53.551 "write_zeroes": true, 00:07:53.551 "zcopy": true, 00:07:53.551 "get_zone_info": false, 00:07:53.551 "zone_management": false, 00:07:53.551 "zone_append": false, 00:07:53.551 "compare": false, 00:07:53.551 "compare_and_write": false, 00:07:53.551 "abort": true, 00:07:53.551 "seek_hole": false, 00:07:53.551 "seek_data": false, 00:07:53.551 "copy": true, 00:07:53.551 "nvme_iov_md": false 00:07:53.551 }, 00:07:53.551 "memory_domains": [ 00:07:53.551 { 00:07:53.551 "dma_device_id": "system", 00:07:53.551 "dma_device_type": 1 00:07:53.551 }, 00:07:53.551 { 00:07:53.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.551 "dma_device_type": 2 00:07:53.551 } 00:07:53.551 ], 00:07:53.551 "driver_specific": {} 00:07:53.551 } 00:07:53.551 ]' 00:07:53.551 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:53.551 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:53.551 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:53.551 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.551 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:53.551 [2024-10-14 17:24:52.639891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:53.551 [2024-10-14 17:24:52.639918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.551 [2024-10-14 17:24:52.639929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2483790 00:07:53.551 [2024-10-14 17:24:52.639935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.551 [2024-10-14 17:24:52.641010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.551 [2024-10-14 17:24:52.641030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:53.551 Passthru0 00:07:53.551 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.551 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:53.551 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.551 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:53.551 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.551 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:53.551 { 00:07:53.551 "name": "Malloc0", 00:07:53.551 "aliases": [ 00:07:53.551 "a4eb1e2a-5291-4876-b52b-e05f4b1f28ff" 00:07:53.551 ], 00:07:53.551 "product_name": "Malloc disk", 00:07:53.551 "block_size": 512, 00:07:53.552 "num_blocks": 16384, 00:07:53.552 "uuid": "a4eb1e2a-5291-4876-b52b-e05f4b1f28ff", 00:07:53.552 "assigned_rate_limits": { 00:07:53.552 "rw_ios_per_sec": 0, 00:07:53.552 "rw_mbytes_per_sec": 0, 00:07:53.552 "r_mbytes_per_sec": 0, 00:07:53.552 "w_mbytes_per_sec": 0 00:07:53.552 }, 00:07:53.552 "claimed": true, 00:07:53.552 "claim_type": "exclusive_write", 00:07:53.552 "zoned": false, 00:07:53.552 "supported_io_types": { 00:07:53.552 "read": true, 00:07:53.552 "write": true, 00:07:53.552 "unmap": true, 00:07:53.552 "flush": 
true, 00:07:53.552 "reset": true, 00:07:53.552 "nvme_admin": false, 00:07:53.552 "nvme_io": false, 00:07:53.552 "nvme_io_md": false, 00:07:53.552 "write_zeroes": true, 00:07:53.552 "zcopy": true, 00:07:53.552 "get_zone_info": false, 00:07:53.552 "zone_management": false, 00:07:53.552 "zone_append": false, 00:07:53.552 "compare": false, 00:07:53.552 "compare_and_write": false, 00:07:53.552 "abort": true, 00:07:53.552 "seek_hole": false, 00:07:53.552 "seek_data": false, 00:07:53.552 "copy": true, 00:07:53.552 "nvme_iov_md": false 00:07:53.552 }, 00:07:53.552 "memory_domains": [ 00:07:53.552 { 00:07:53.552 "dma_device_id": "system", 00:07:53.552 "dma_device_type": 1 00:07:53.552 }, 00:07:53.552 { 00:07:53.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.552 "dma_device_type": 2 00:07:53.552 } 00:07:53.552 ], 00:07:53.552 "driver_specific": {} 00:07:53.552 }, 00:07:53.552 { 00:07:53.552 "name": "Passthru0", 00:07:53.552 "aliases": [ 00:07:53.552 "72284811-2c13-5ad6-ac6d-853043a1b8c6" 00:07:53.552 ], 00:07:53.552 "product_name": "passthru", 00:07:53.552 "block_size": 512, 00:07:53.552 "num_blocks": 16384, 00:07:53.552 "uuid": "72284811-2c13-5ad6-ac6d-853043a1b8c6", 00:07:53.552 "assigned_rate_limits": { 00:07:53.552 "rw_ios_per_sec": 0, 00:07:53.552 "rw_mbytes_per_sec": 0, 00:07:53.552 "r_mbytes_per_sec": 0, 00:07:53.552 "w_mbytes_per_sec": 0 00:07:53.552 }, 00:07:53.552 "claimed": false, 00:07:53.552 "zoned": false, 00:07:53.552 "supported_io_types": { 00:07:53.552 "read": true, 00:07:53.552 "write": true, 00:07:53.552 "unmap": true, 00:07:53.552 "flush": true, 00:07:53.552 "reset": true, 00:07:53.552 "nvme_admin": false, 00:07:53.552 "nvme_io": false, 00:07:53.552 "nvme_io_md": false, 00:07:53.552 "write_zeroes": true, 00:07:53.552 "zcopy": true, 00:07:53.552 "get_zone_info": false, 00:07:53.552 "zone_management": false, 00:07:53.552 "zone_append": false, 00:07:53.552 "compare": false, 00:07:53.552 "compare_and_write": false, 00:07:53.552 "abort": true, 00:07:53.552 "seek_hole": false, 00:07:53.552 "seek_data": false, 00:07:53.552 "copy": true, 00:07:53.552 "nvme_iov_md": false 00:07:53.552 }, 00:07:53.552 "memory_domains": [ 00:07:53.552 { 00:07:53.552 "dma_device_id": "system", 00:07:53.552 "dma_device_type": 1 00:07:53.552 }, 00:07:53.552 { 00:07:53.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.552 "dma_device_type": 2 00:07:53.552 } 00:07:53.552 ], 00:07:53.552 "driver_specific": { 00:07:53.552 "passthru": { 00:07:53.552 "name": "Passthru0", 00:07:53.552 "base_bdev_name": "Malloc0" 00:07:53.552 } 00:07:53.552 } 00:07:53.552 } 00:07:53.552 ]' 00:07:53.552 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:53.811 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:53.811 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:53.811 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.811 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:53.811 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.811 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:53.811 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.811 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:53.811 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.811 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:07:53.811 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.811 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:53.811 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.811 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:53.811 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:53.811 17:24:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:53.811 00:07:53.811 real 0m0.265s 00:07:53.811 user 0m0.164s 00:07:53.811 sys 0m0.034s 00:07:53.811 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.811 17:24:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:53.811 ************************************ 00:07:53.811 END TEST rpc_integrity 00:07:53.811 ************************************ 00:07:53.811 17:24:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:53.811 17:24:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:53.811 17:24:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.811 17:24:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.811 ************************************ 00:07:53.811 START TEST rpc_plugins 00:07:53.811 ************************************ 00:07:53.811 17:24:52 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:07:53.811 17:24:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:53.811 17:24:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.811 17:24:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:53.811 17:24:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.811 17:24:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:53.811 17:24:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:53.811 17:24:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.811 17:24:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:53.811 17:24:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.811 17:24:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:53.811 { 00:07:53.811 "name": "Malloc1", 00:07:53.811 "aliases": [ 00:07:53.811 "b43c08b7-70c5-429f-b2ff-f8f2a13494c0" 00:07:53.811 ], 00:07:53.811 "product_name": "Malloc disk", 00:07:53.811 "block_size": 4096, 00:07:53.811 "num_blocks": 256, 00:07:53.811 "uuid": "b43c08b7-70c5-429f-b2ff-f8f2a13494c0", 00:07:53.811 "assigned_rate_limits": { 00:07:53.811 "rw_ios_per_sec": 0, 00:07:53.811 "rw_mbytes_per_sec": 0, 00:07:53.811 "r_mbytes_per_sec": 0, 00:07:53.811 "w_mbytes_per_sec": 0 00:07:53.811 }, 00:07:53.811 "claimed": false, 00:07:53.811 "zoned": false, 00:07:53.811 "supported_io_types": { 00:07:53.811 "read": true, 00:07:53.811 "write": true, 00:07:53.811 "unmap": true, 00:07:53.811 "flush": true, 00:07:53.812 "reset": true, 00:07:53.812 "nvme_admin": false, 00:07:53.812 "nvme_io": false, 00:07:53.812 "nvme_io_md": false, 00:07:53.812 "write_zeroes": true, 00:07:53.812 "zcopy": true, 00:07:53.812 "get_zone_info": false, 00:07:53.812 "zone_management": false, 00:07:53.812 "zone_append": false, 00:07:53.812 "compare": false, 00:07:53.812 "compare_and_write": false, 00:07:53.812 "abort": true, 00:07:53.812 "seek_hole": false, 00:07:53.812 "seek_data": false, 00:07:53.812 "copy": true, 00:07:53.812 "nvme_iov_md": false 
00:07:53.812 }, 00:07:53.812 "memory_domains": [ 00:07:53.812 { 00:07:53.812 "dma_device_id": "system", 00:07:53.812 "dma_device_type": 1 00:07:53.812 }, 00:07:53.812 { 00:07:53.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.812 "dma_device_type": 2 00:07:53.812 } 00:07:53.812 ], 00:07:53.812 "driver_specific": {} 00:07:53.812 } 00:07:53.812 ]' 00:07:53.812 17:24:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:53.812 17:24:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:53.812 17:24:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:53.812 17:24:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.812 17:24:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:53.812 17:24:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.812 17:24:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:53.812 17:24:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.812 17:24:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:53.812 17:24:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.812 17:24:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:54.071 17:24:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:54.071 17:24:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:54.071 00:07:54.071 real 0m0.144s 00:07:54.071 user 0m0.088s 00:07:54.071 sys 0m0.019s 00:07:54.071 17:24:52 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.071 17:24:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:54.071 ************************************ 00:07:54.071 END TEST rpc_plugins 00:07:54.071 ************************************ 00:07:54.071 17:24:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:54.071 17:24:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.071 17:24:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.071 17:24:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.071 ************************************ 00:07:54.071 START TEST rpc_trace_cmd_test 00:07:54.071 ************************************ 00:07:54.071 17:24:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:07:54.071 17:24:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:54.071 17:24:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:54.071 17:24:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.071 17:24:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.071 17:24:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.071 17:24:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:54.071 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid917124", 00:07:54.071 "tpoint_group_mask": "0x8", 00:07:54.071 "iscsi_conn": { 00:07:54.071 "mask": "0x2", 00:07:54.071 "tpoint_mask": "0x0" 00:07:54.071 }, 00:07:54.071 "scsi": { 00:07:54.071 "mask": "0x4", 00:07:54.071 "tpoint_mask": "0x0" 00:07:54.071 }, 00:07:54.071 "bdev": { 00:07:54.071 "mask": "0x8", 00:07:54.071 "tpoint_mask": "0xffffffffffffffff" 00:07:54.071 }, 00:07:54.071 "nvmf_rdma": { 00:07:54.071 "mask": "0x10", 00:07:54.071 "tpoint_mask": "0x0" 00:07:54.071 }, 00:07:54.071 "nvmf_tcp": { 00:07:54.071 "mask": "0x20", 00:07:54.071 
"tpoint_mask": "0x0" 00:07:54.071 }, 00:07:54.071 "ftl": { 00:07:54.071 "mask": "0x40", 00:07:54.071 "tpoint_mask": "0x0" 00:07:54.071 }, 00:07:54.071 "blobfs": { 00:07:54.071 "mask": "0x80", 00:07:54.071 "tpoint_mask": "0x0" 00:07:54.071 }, 00:07:54.071 "dsa": { 00:07:54.071 "mask": "0x200", 00:07:54.071 "tpoint_mask": "0x0" 00:07:54.071 }, 00:07:54.071 "thread": { 00:07:54.071 "mask": "0x400", 00:07:54.071 "tpoint_mask": "0x0" 00:07:54.071 }, 00:07:54.071 "nvme_pcie": { 00:07:54.071 "mask": "0x800", 00:07:54.071 "tpoint_mask": "0x0" 00:07:54.071 }, 00:07:54.071 "iaa": { 00:07:54.071 "mask": "0x1000", 00:07:54.071 "tpoint_mask": "0x0" 00:07:54.071 }, 00:07:54.071 "nvme_tcp": { 00:07:54.071 "mask": "0x2000", 00:07:54.071 "tpoint_mask": "0x0" 00:07:54.071 }, 00:07:54.071 "bdev_nvme": { 00:07:54.071 "mask": "0x4000", 00:07:54.071 "tpoint_mask": "0x0" 00:07:54.071 }, 00:07:54.071 "sock": { 00:07:54.071 "mask": "0x8000", 00:07:54.071 "tpoint_mask": "0x0" 00:07:54.071 }, 00:07:54.071 "blob": { 00:07:54.071 "mask": "0x10000", 00:07:54.071 "tpoint_mask": "0x0" 00:07:54.071 }, 00:07:54.071 "bdev_raid": { 00:07:54.071 "mask": "0x20000", 00:07:54.071 "tpoint_mask": "0x0" 00:07:54.071 }, 00:07:54.071 "scheduler": { 00:07:54.071 "mask": "0x40000", 00:07:54.071 "tpoint_mask": "0x0" 00:07:54.071 } 00:07:54.071 }' 00:07:54.071 17:24:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:54.071 17:24:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:54.071 17:24:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:54.071 17:24:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:54.071 17:24:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:54.331 17:24:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:54.331 17:24:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:54.331 17:24:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:54.331 17:24:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:54.331 17:24:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:54.331 00:07:54.331 real 0m0.233s 00:07:54.331 user 0m0.196s 00:07:54.331 sys 0m0.028s 00:07:54.331 17:24:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.331 17:24:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.331 ************************************ 00:07:54.331 END TEST rpc_trace_cmd_test 00:07:54.331 ************************************ 00:07:54.331 17:24:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:54.331 17:24:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:54.331 17:24:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:54.331 17:24:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.331 17:24:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.331 17:24:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.331 ************************************ 00:07:54.331 START TEST rpc_daemon_integrity 00:07:54.331 ************************************ 00:07:54.331 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:54.331 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:54.331 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.331 17:24:53 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:54.331 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.331 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:54.331 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:54.331 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:54.331 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:54.331 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.331 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:54.331 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.331 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:54.331 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:54.331 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.331 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:54.331 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.331 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:54.331 { 00:07:54.331 "name": "Malloc2", 00:07:54.331 "aliases": [ 00:07:54.331 "84da7f96-18f5-4950-beda-ac0a34dea266" 00:07:54.331 ], 00:07:54.331 "product_name": "Malloc disk", 00:07:54.331 "block_size": 512, 00:07:54.331 "num_blocks": 16384, 00:07:54.331 "uuid": "84da7f96-18f5-4950-beda-ac0a34dea266", 00:07:54.331 "assigned_rate_limits": { 00:07:54.331 "rw_ios_per_sec": 0, 00:07:54.331 "rw_mbytes_per_sec": 0, 00:07:54.331 "r_mbytes_per_sec": 0, 00:07:54.331 "w_mbytes_per_sec": 0 00:07:54.331 }, 00:07:54.331 "claimed": false, 00:07:54.331 "zoned": false, 00:07:54.331 "supported_io_types": { 00:07:54.331 "read": true, 00:07:54.331 "write": true, 00:07:54.331 "unmap": true, 00:07:54.331 "flush": true, 00:07:54.331 "reset": true, 00:07:54.331 "nvme_admin": false, 00:07:54.331 "nvme_io": false, 00:07:54.331 "nvme_io_md": false, 00:07:54.331 "write_zeroes": true, 00:07:54.331 "zcopy": true, 00:07:54.331 "get_zone_info": false, 00:07:54.331 "zone_management": false, 00:07:54.331 "zone_append": false, 00:07:54.331 "compare": false, 00:07:54.331 "compare_and_write": false, 00:07:54.331 "abort": true, 00:07:54.331 "seek_hole": false, 00:07:54.331 "seek_data": false, 00:07:54.331 "copy": true, 00:07:54.331 "nvme_iov_md": false 00:07:54.331 }, 00:07:54.331 "memory_domains": [ 00:07:54.331 { 00:07:54.331 "dma_device_id": "system", 00:07:54.331 "dma_device_type": 1 00:07:54.331 }, 00:07:54.331 { 00:07:54.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.331 "dma_device_type": 2 00:07:54.331 } 00:07:54.331 ], 00:07:54.331 "driver_specific": {} 00:07:54.331 } 00:07:54.331 ]' 00:07:54.331 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:54.591 [2024-10-14 17:24:53.494207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:54.591 
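The vbdev_passthru NOTICE sequence at this point (match on the base bdev, open it, create the io_device, claim it) is the same claim handshake the earlier rpc_integrity pass performed on Malloc0. A hedged shell equivalent of what the test drives over RPC; every method name below appears elsewhere in this log, and a running spdk_tgt is assumed:

# Build the same malloc + passthru stack by hand and confirm the claim.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
rpc="$SPDK_DIR/scripts/rpc.py"
"$rpc" bdev_malloc_create -b Malloc2 8 512
"$rpc" bdev_passthru_create -b Malloc2 -p Passthru0
"$rpc" bdev_get_bdevs | jq -r '.[] | "\(.name) claimed=\(.claimed)"'
# expect: Malloc2 claimed=true (claim_type exclusive_write), Passthru0 claimed=false
"$rpc" bdev_passthru_delete Passthru0   # releases the claim on Malloc2
"$rpc" bdev_malloc_delete Malloc2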
[2024-10-14 17:24:53.494233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.591 [2024-10-14 17:24:53.494246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2484330 00:07:54.591 [2024-10-14 17:24:53.494252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.591 [2024-10-14 17:24:53.495320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.591 [2024-10-14 17:24:53.495339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:54.591 Passthru0 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:54.591 { 00:07:54.591 "name": "Malloc2", 00:07:54.591 "aliases": [ 00:07:54.591 "84da7f96-18f5-4950-beda-ac0a34dea266" 00:07:54.591 ], 00:07:54.591 "product_name": "Malloc disk", 00:07:54.591 "block_size": 512, 00:07:54.591 "num_blocks": 16384, 00:07:54.591 "uuid": "84da7f96-18f5-4950-beda-ac0a34dea266", 00:07:54.591 "assigned_rate_limits": { 00:07:54.591 "rw_ios_per_sec": 0, 00:07:54.591 "rw_mbytes_per_sec": 0, 00:07:54.591 "r_mbytes_per_sec": 0, 00:07:54.591 "w_mbytes_per_sec": 0 00:07:54.591 }, 00:07:54.591 "claimed": true, 00:07:54.591 "claim_type": "exclusive_write", 00:07:54.591 "zoned": false, 00:07:54.591 "supported_io_types": { 00:07:54.591 "read": true, 00:07:54.591 "write": true, 00:07:54.591 "unmap": true, 00:07:54.591 "flush": true, 00:07:54.591 "reset": true, 00:07:54.591 "nvme_admin": false, 00:07:54.591 "nvme_io": false, 00:07:54.591 "nvme_io_md": false, 00:07:54.591 "write_zeroes": true, 00:07:54.591 "zcopy": true, 00:07:54.591 "get_zone_info": false, 00:07:54.591 "zone_management": false, 00:07:54.591 "zone_append": false, 00:07:54.591 "compare": false, 00:07:54.591 "compare_and_write": false, 00:07:54.591 "abort": true, 00:07:54.591 "seek_hole": false, 00:07:54.591 "seek_data": false, 00:07:54.591 "copy": true, 00:07:54.591 "nvme_iov_md": false 00:07:54.591 }, 00:07:54.591 "memory_domains": [ 00:07:54.591 { 00:07:54.591 "dma_device_id": "system", 00:07:54.591 "dma_device_type": 1 00:07:54.591 }, 00:07:54.591 { 00:07:54.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.591 "dma_device_type": 2 00:07:54.591 } 00:07:54.591 ], 00:07:54.591 "driver_specific": {} 00:07:54.591 }, 00:07:54.591 { 00:07:54.591 "name": "Passthru0", 00:07:54.591 "aliases": [ 00:07:54.591 "22cfb3db-0ce6-52c1-9a74-bdebb9fcff08" 00:07:54.591 ], 00:07:54.591 "product_name": "passthru", 00:07:54.591 "block_size": 512, 00:07:54.591 "num_blocks": 16384, 00:07:54.591 "uuid": "22cfb3db-0ce6-52c1-9a74-bdebb9fcff08", 00:07:54.591 "assigned_rate_limits": { 00:07:54.591 "rw_ios_per_sec": 0, 00:07:54.591 "rw_mbytes_per_sec": 0, 00:07:54.591 "r_mbytes_per_sec": 0, 00:07:54.591 "w_mbytes_per_sec": 0 00:07:54.591 }, 00:07:54.591 "claimed": false, 00:07:54.591 "zoned": false, 00:07:54.591 "supported_io_types": { 00:07:54.591 "read": true, 00:07:54.591 "write": true, 00:07:54.591 "unmap": true, 00:07:54.591 "flush": true, 00:07:54.591 "reset": true, 
00:07:54.591 "nvme_admin": false, 00:07:54.591 "nvme_io": false, 00:07:54.591 "nvme_io_md": false, 00:07:54.591 "write_zeroes": true, 00:07:54.591 "zcopy": true, 00:07:54.591 "get_zone_info": false, 00:07:54.591 "zone_management": false, 00:07:54.591 "zone_append": false, 00:07:54.591 "compare": false, 00:07:54.591 "compare_and_write": false, 00:07:54.591 "abort": true, 00:07:54.591 "seek_hole": false, 00:07:54.591 "seek_data": false, 00:07:54.591 "copy": true, 00:07:54.591 "nvme_iov_md": false 00:07:54.591 }, 00:07:54.591 "memory_domains": [ 00:07:54.591 { 00:07:54.591 "dma_device_id": "system", 00:07:54.591 "dma_device_type": 1 00:07:54.591 }, 00:07:54.591 { 00:07:54.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.591 "dma_device_type": 2 00:07:54.591 } 00:07:54.591 ], 00:07:54.591 "driver_specific": { 00:07:54.591 "passthru": { 00:07:54.591 "name": "Passthru0", 00:07:54.591 "base_bdev_name": "Malloc2" 00:07:54.591 } 00:07:54.591 } 00:07:54.591 } 00:07:54.591 ]' 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:54.591 00:07:54.591 real 0m0.273s 00:07:54.591 user 0m0.176s 00:07:54.591 sys 0m0.033s 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.591 17:24:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:54.591 ************************************ 00:07:54.591 END TEST rpc_daemon_integrity 00:07:54.591 ************************************ 00:07:54.591 17:24:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:54.591 17:24:53 rpc -- rpc/rpc.sh@84 -- # killprocess 917124 00:07:54.591 17:24:53 rpc -- common/autotest_common.sh@950 -- # '[' -z 917124 ']' 00:07:54.591 17:24:53 rpc -- common/autotest_common.sh@954 -- # kill -0 917124 00:07:54.591 17:24:53 rpc -- common/autotest_common.sh@955 -- # uname 00:07:54.591 17:24:53 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:54.591 17:24:53 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 917124 
00:07:54.591 17:24:53 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:54.591 17:24:53 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:54.591 17:24:53 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 917124' 00:07:54.591 killing process with pid 917124 00:07:54.591 17:24:53 rpc -- common/autotest_common.sh@969 -- # kill 917124 00:07:54.591 17:24:53 rpc -- common/autotest_common.sh@974 -- # wait 917124 00:07:55.161 00:07:55.161 real 0m2.082s 00:07:55.161 user 0m2.662s 00:07:55.161 sys 0m0.691s 00:07:55.161 17:24:54 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.161 17:24:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.161 ************************************ 00:07:55.161 END TEST rpc 00:07:55.161 ************************************ 00:07:55.161 17:24:54 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:55.161 17:24:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.161 17:24:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.162 17:24:54 -- common/autotest_common.sh@10 -- # set +x 00:07:55.162 ************************************ 00:07:55.162 START TEST skip_rpc 00:07:55.162 ************************************ 00:07:55.162 17:24:54 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:55.162 * Looking for test storage... 00:07:55.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:55.162 17:24:54 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:55.162 17:24:54 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:55.162 17:24:54 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:55.162 17:24:54 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.162 17:24:54 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:55.162 17:24:54 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.162 17:24:54 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:55.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.162 --rc genhtml_branch_coverage=1 00:07:55.162 --rc genhtml_function_coverage=1 00:07:55.162 --rc genhtml_legend=1 00:07:55.162 --rc geninfo_all_blocks=1 00:07:55.162 --rc geninfo_unexecuted_blocks=1 00:07:55.162 00:07:55.162 ' 00:07:55.162 17:24:54 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:55.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.162 --rc genhtml_branch_coverage=1 00:07:55.162 --rc genhtml_function_coverage=1 00:07:55.162 --rc genhtml_legend=1 00:07:55.162 --rc geninfo_all_blocks=1 00:07:55.162 --rc geninfo_unexecuted_blocks=1 00:07:55.162 00:07:55.162 ' 00:07:55.162 17:24:54 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:55.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.162 --rc genhtml_branch_coverage=1 00:07:55.162 --rc genhtml_function_coverage=1 00:07:55.162 --rc genhtml_legend=1 00:07:55.162 --rc geninfo_all_blocks=1 00:07:55.162 --rc geninfo_unexecuted_blocks=1 00:07:55.162 00:07:55.162 ' 00:07:55.162 17:24:54 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:55.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.162 --rc genhtml_branch_coverage=1 00:07:55.162 --rc genhtml_function_coverage=1 00:07:55.162 --rc genhtml_legend=1 00:07:55.162 --rc geninfo_all_blocks=1 00:07:55.162 --rc geninfo_unexecuted_blocks=1 00:07:55.162 00:07:55.162 ' 00:07:55.162 17:24:54 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:55.162 17:24:54 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:55.162 17:24:54 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:55.162 17:24:54 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.162 17:24:54 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.162 17:24:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.162 ************************************ 00:07:55.162 START TEST skip_rpc 00:07:55.162 ************************************ 00:07:55.162 17:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:07:55.162 
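The skip_rpc case that follows starts the target with --no-rpc-server, so /var/tmp/spdk.sock is never created and the NOT-wrapped spdk_get_version call has to fail for the test to pass. A hedged standalone sketch of the same check, with flags copied from the trace below:

# With --no-rpc-server, an RPC failure is the expected, passing outcome.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
"$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
pid=$!
sleep 5    # mirror the test's fixed sleep; there is no socket to wait on
if "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; then
  echo "unexpected: RPC server answered" >&2
else
  echo "expected: no RPC server listening"
fi
kill "$pid"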
17:24:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=917609 00:07:55.162 17:24:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:55.162 17:24:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:55.162 17:24:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:55.423 [2024-10-14 17:24:54.336061] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:07:55.423 [2024-10-14 17:24:54.336100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid917609 ] 00:07:55.423 [2024-10-14 17:24:54.404845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.423 [2024-10-14 17:24:54.445636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 917609 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 917609 ']' 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 917609 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 917609 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 917609' 00:08:00.697 killing process with pid 917609 00:08:00.697 17:24:59 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 917609 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 917609 00:08:00.697 00:08:00.697 real 0m5.364s 00:08:00.697 user 0m5.108s 00:08:00.697 sys 0m0.284s 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.697 17:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.697 ************************************ 00:08:00.697 END TEST skip_rpc 00:08:00.697 ************************************ 00:08:00.697 17:24:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:00.697 17:24:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:00.697 17:24:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.697 17:24:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.697 ************************************ 00:08:00.697 START TEST skip_rpc_with_json 00:08:00.697 ************************************ 00:08:00.697 17:24:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:08:00.697 17:24:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:00.697 17:24:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=918519 00:08:00.697 17:24:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:00.697 17:24:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:00.697 17:24:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 918519 00:08:00.697 17:24:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 918519 ']' 00:08:00.697 17:24:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.697 17:24:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.697 17:24:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.697 17:24:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.697 17:24:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:00.697 [2024-10-14 17:24:59.773712] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
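The skip_rpc run above starts spdk_tgt with --no-rpc-server and then asserts that rpc_cmd spdk_get_version fails, via the NOT wrapper from autotest_common.sh. A minimal bash sketch of that expect-failure pattern; not_ok is an illustrative name, not the real helper:

not_ok() {
    # Invert the exit status: succeed only when the wrapped command fails.
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is what the test wants
}

# With no RPC server listening, any RPC invocation must fail:
not_ok false && echo 'command failed as expected'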
00:08:00.697 [2024-10-14 17:24:59.773752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid918519 ] 00:08:00.956 [2024-10-14 17:24:59.842457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.956 [2024-10-14 17:24:59.885484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.214 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.214 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:08:01.214 17:25:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:01.214 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.214 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:01.214 [2024-10-14 17:25:00.105683] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:01.214 request: 00:08:01.214 { 00:08:01.214 "trtype": "tcp", 00:08:01.214 "method": "nvmf_get_transports", 00:08:01.214 "req_id": 1 00:08:01.214 } 00:08:01.214 Got JSON-RPC error response 00:08:01.214 response: 00:08:01.214 { 00:08:01.214 "code": -19, 00:08:01.214 "message": "No such device" 00:08:01.214 } 00:08:01.214 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:01.214 17:25:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:01.214 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.214 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:01.214 [2024-10-14 17:25:00.117789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.214 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.214 17:25:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:01.214 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.214 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:01.214 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.214 17:25:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:01.214 { 00:08:01.214 "subsystems": [ 00:08:01.214 { 00:08:01.214 "subsystem": "fsdev", 00:08:01.214 "config": [ 00:08:01.214 { 00:08:01.214 "method": "fsdev_set_opts", 00:08:01.214 "params": { 00:08:01.214 "fsdev_io_pool_size": 65535, 00:08:01.214 "fsdev_io_cache_size": 256 00:08:01.214 } 00:08:01.214 } 00:08:01.214 ] 00:08:01.214 }, 00:08:01.214 { 00:08:01.214 "subsystem": "vfio_user_target", 00:08:01.214 "config": null 00:08:01.214 }, 00:08:01.214 { 00:08:01.214 "subsystem": "keyring", 00:08:01.214 "config": [] 00:08:01.214 }, 00:08:01.214 { 00:08:01.214 "subsystem": "iobuf", 00:08:01.214 "config": [ 00:08:01.214 { 00:08:01.214 "method": "iobuf_set_options", 00:08:01.214 "params": { 00:08:01.214 "small_pool_count": 8192, 00:08:01.214 "large_pool_count": 1024, 00:08:01.214 "small_bufsize": 8192, 00:08:01.214 "large_bufsize": 135168 00:08:01.214 } 00:08:01.214 } 00:08:01.214 ] 00:08:01.214 }, 00:08:01.214 { 
00:08:01.214 "subsystem": "sock", 00:08:01.214 "config": [ 00:08:01.214 { 00:08:01.214 "method": "sock_set_default_impl", 00:08:01.214 "params": { 00:08:01.214 "impl_name": "posix" 00:08:01.214 } 00:08:01.214 }, 00:08:01.214 { 00:08:01.214 "method": "sock_impl_set_options", 00:08:01.214 "params": { 00:08:01.214 "impl_name": "ssl", 00:08:01.214 "recv_buf_size": 4096, 00:08:01.214 "send_buf_size": 4096, 00:08:01.214 "enable_recv_pipe": true, 00:08:01.214 "enable_quickack": false, 00:08:01.214 "enable_placement_id": 0, 00:08:01.214 "enable_zerocopy_send_server": true, 00:08:01.214 "enable_zerocopy_send_client": false, 00:08:01.214 "zerocopy_threshold": 0, 00:08:01.214 "tls_version": 0, 00:08:01.214 "enable_ktls": false 00:08:01.214 } 00:08:01.214 }, 00:08:01.214 { 00:08:01.214 "method": "sock_impl_set_options", 00:08:01.214 "params": { 00:08:01.214 "impl_name": "posix", 00:08:01.214 "recv_buf_size": 2097152, 00:08:01.214 "send_buf_size": 2097152, 00:08:01.214 "enable_recv_pipe": true, 00:08:01.214 "enable_quickack": false, 00:08:01.214 "enable_placement_id": 0, 00:08:01.214 "enable_zerocopy_send_server": true, 00:08:01.214 "enable_zerocopy_send_client": false, 00:08:01.214 "zerocopy_threshold": 0, 00:08:01.214 "tls_version": 0, 00:08:01.214 "enable_ktls": false 00:08:01.214 } 00:08:01.214 } 00:08:01.214 ] 00:08:01.214 }, 00:08:01.214 { 00:08:01.214 "subsystem": "vmd", 00:08:01.214 "config": [] 00:08:01.214 }, 00:08:01.214 { 00:08:01.214 "subsystem": "accel", 00:08:01.214 "config": [ 00:08:01.214 { 00:08:01.214 "method": "accel_set_options", 00:08:01.214 "params": { 00:08:01.214 "small_cache_size": 128, 00:08:01.214 "large_cache_size": 16, 00:08:01.214 "task_count": 2048, 00:08:01.214 "sequence_count": 2048, 00:08:01.214 "buf_count": 2048 00:08:01.214 } 00:08:01.214 } 00:08:01.214 ] 00:08:01.214 }, 00:08:01.214 { 00:08:01.214 "subsystem": "bdev", 00:08:01.214 "config": [ 00:08:01.214 { 00:08:01.214 "method": "bdev_set_options", 00:08:01.214 "params": { 00:08:01.214 "bdev_io_pool_size": 65535, 00:08:01.214 "bdev_io_cache_size": 256, 00:08:01.214 "bdev_auto_examine": true, 00:08:01.214 "iobuf_small_cache_size": 128, 00:08:01.214 "iobuf_large_cache_size": 16 00:08:01.214 } 00:08:01.214 }, 00:08:01.214 { 00:08:01.214 "method": "bdev_raid_set_options", 00:08:01.214 "params": { 00:08:01.214 "process_window_size_kb": 1024, 00:08:01.214 "process_max_bandwidth_mb_sec": 0 00:08:01.214 } 00:08:01.214 }, 00:08:01.214 { 00:08:01.214 "method": "bdev_iscsi_set_options", 00:08:01.214 "params": { 00:08:01.214 "timeout_sec": 30 00:08:01.214 } 00:08:01.214 }, 00:08:01.214 { 00:08:01.214 "method": "bdev_nvme_set_options", 00:08:01.214 "params": { 00:08:01.214 "action_on_timeout": "none", 00:08:01.215 "timeout_us": 0, 00:08:01.215 "timeout_admin_us": 0, 00:08:01.215 "keep_alive_timeout_ms": 10000, 00:08:01.215 "arbitration_burst": 0, 00:08:01.215 "low_priority_weight": 0, 00:08:01.215 "medium_priority_weight": 0, 00:08:01.215 "high_priority_weight": 0, 00:08:01.215 "nvme_adminq_poll_period_us": 10000, 00:08:01.215 "nvme_ioq_poll_period_us": 0, 00:08:01.215 "io_queue_requests": 0, 00:08:01.215 "delay_cmd_submit": true, 00:08:01.215 "transport_retry_count": 4, 00:08:01.215 "bdev_retry_count": 3, 00:08:01.215 "transport_ack_timeout": 0, 00:08:01.215 "ctrlr_loss_timeout_sec": 0, 00:08:01.215 "reconnect_delay_sec": 0, 00:08:01.215 "fast_io_fail_timeout_sec": 0, 00:08:01.215 "disable_auto_failback": false, 00:08:01.215 "generate_uuids": false, 00:08:01.215 "transport_tos": 0, 00:08:01.215 "nvme_error_stat": false, 
00:08:01.215 "rdma_srq_size": 0, 00:08:01.215 "io_path_stat": false, 00:08:01.215 "allow_accel_sequence": false, 00:08:01.215 "rdma_max_cq_size": 0, 00:08:01.215 "rdma_cm_event_timeout_ms": 0, 00:08:01.215 "dhchap_digests": [ 00:08:01.215 "sha256", 00:08:01.215 "sha384", 00:08:01.215 "sha512" 00:08:01.215 ], 00:08:01.215 "dhchap_dhgroups": [ 00:08:01.215 "null", 00:08:01.215 "ffdhe2048", 00:08:01.215 "ffdhe3072", 00:08:01.215 "ffdhe4096", 00:08:01.215 "ffdhe6144", 00:08:01.215 "ffdhe8192" 00:08:01.215 ] 00:08:01.215 } 00:08:01.215 }, 00:08:01.215 { 00:08:01.215 "method": "bdev_nvme_set_hotplug", 00:08:01.215 "params": { 00:08:01.215 "period_us": 100000, 00:08:01.215 "enable": false 00:08:01.215 } 00:08:01.215 }, 00:08:01.215 { 00:08:01.215 "method": "bdev_wait_for_examine" 00:08:01.215 } 00:08:01.215 ] 00:08:01.215 }, 00:08:01.215 { 00:08:01.215 "subsystem": "scsi", 00:08:01.215 "config": null 00:08:01.215 }, 00:08:01.215 { 00:08:01.215 "subsystem": "scheduler", 00:08:01.215 "config": [ 00:08:01.215 { 00:08:01.215 "method": "framework_set_scheduler", 00:08:01.215 "params": { 00:08:01.215 "name": "static" 00:08:01.215 } 00:08:01.215 } 00:08:01.215 ] 00:08:01.215 }, 00:08:01.215 { 00:08:01.215 "subsystem": "vhost_scsi", 00:08:01.215 "config": [] 00:08:01.215 }, 00:08:01.215 { 00:08:01.215 "subsystem": "vhost_blk", 00:08:01.215 "config": [] 00:08:01.215 }, 00:08:01.215 { 00:08:01.215 "subsystem": "ublk", 00:08:01.215 "config": [] 00:08:01.215 }, 00:08:01.215 { 00:08:01.215 "subsystem": "nbd", 00:08:01.215 "config": [] 00:08:01.215 }, 00:08:01.215 { 00:08:01.215 "subsystem": "nvmf", 00:08:01.215 "config": [ 00:08:01.215 { 00:08:01.215 "method": "nvmf_set_config", 00:08:01.215 "params": { 00:08:01.215 "discovery_filter": "match_any", 00:08:01.215 "admin_cmd_passthru": { 00:08:01.215 "identify_ctrlr": false 00:08:01.215 }, 00:08:01.215 "dhchap_digests": [ 00:08:01.215 "sha256", 00:08:01.215 "sha384", 00:08:01.215 "sha512" 00:08:01.215 ], 00:08:01.215 "dhchap_dhgroups": [ 00:08:01.215 "null", 00:08:01.215 "ffdhe2048", 00:08:01.215 "ffdhe3072", 00:08:01.215 "ffdhe4096", 00:08:01.215 "ffdhe6144", 00:08:01.215 "ffdhe8192" 00:08:01.215 ] 00:08:01.215 } 00:08:01.215 }, 00:08:01.215 { 00:08:01.215 "method": "nvmf_set_max_subsystems", 00:08:01.215 "params": { 00:08:01.215 "max_subsystems": 1024 00:08:01.215 } 00:08:01.215 }, 00:08:01.215 { 00:08:01.215 "method": "nvmf_set_crdt", 00:08:01.215 "params": { 00:08:01.215 "crdt1": 0, 00:08:01.215 "crdt2": 0, 00:08:01.215 "crdt3": 0 00:08:01.215 } 00:08:01.215 }, 00:08:01.215 { 00:08:01.215 "method": "nvmf_create_transport", 00:08:01.215 "params": { 00:08:01.215 "trtype": "TCP", 00:08:01.215 "max_queue_depth": 128, 00:08:01.215 "max_io_qpairs_per_ctrlr": 127, 00:08:01.215 "in_capsule_data_size": 4096, 00:08:01.215 "max_io_size": 131072, 00:08:01.215 "io_unit_size": 131072, 00:08:01.215 "max_aq_depth": 128, 00:08:01.215 "num_shared_buffers": 511, 00:08:01.215 "buf_cache_size": 4294967295, 00:08:01.215 "dif_insert_or_strip": false, 00:08:01.215 "zcopy": false, 00:08:01.215 "c2h_success": true, 00:08:01.215 "sock_priority": 0, 00:08:01.215 "abort_timeout_sec": 1, 00:08:01.215 "ack_timeout": 0, 00:08:01.215 "data_wr_pool_size": 0 00:08:01.215 } 00:08:01.215 } 00:08:01.215 ] 00:08:01.215 }, 00:08:01.215 { 00:08:01.215 "subsystem": "iscsi", 00:08:01.215 "config": [ 00:08:01.215 { 00:08:01.215 "method": "iscsi_set_options", 00:08:01.215 "params": { 00:08:01.215 "node_base": "iqn.2016-06.io.spdk", 00:08:01.215 "max_sessions": 128, 00:08:01.215 
"max_connections_per_session": 2, 00:08:01.215 "max_queue_depth": 64, 00:08:01.215 "default_time2wait": 2, 00:08:01.215 "default_time2retain": 20, 00:08:01.215 "first_burst_length": 8192, 00:08:01.215 "immediate_data": true, 00:08:01.215 "allow_duplicated_isid": false, 00:08:01.215 "error_recovery_level": 0, 00:08:01.215 "nop_timeout": 60, 00:08:01.215 "nop_in_interval": 30, 00:08:01.215 "disable_chap": false, 00:08:01.215 "require_chap": false, 00:08:01.215 "mutual_chap": false, 00:08:01.215 "chap_group": 0, 00:08:01.215 "max_large_datain_per_connection": 64, 00:08:01.215 "max_r2t_per_connection": 4, 00:08:01.215 "pdu_pool_size": 36864, 00:08:01.215 "immediate_data_pool_size": 16384, 00:08:01.215 "data_out_pool_size": 2048 00:08:01.215 } 00:08:01.215 } 00:08:01.215 ] 00:08:01.215 } 00:08:01.215 ] 00:08:01.215 } 00:08:01.215 17:25:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:01.215 17:25:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 918519 00:08:01.215 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 918519 ']' 00:08:01.215 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 918519 00:08:01.215 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:01.215 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:01.215 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 918519 00:08:01.215 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:01.215 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:01.215 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 918519' 00:08:01.215 killing process with pid 918519 00:08:01.215 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 918519 00:08:01.215 17:25:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 918519 00:08:01.783 17:25:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=918734 00:08:01.783 17:25:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:01.783 17:25:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:07.056 17:25:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 918734 00:08:07.056 17:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 918734 ']' 00:08:07.056 17:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 918734 00:08:07.056 17:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:07.056 17:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:07.056 17:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 918734 00:08:07.056 17:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:07.056 17:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:07.056 17:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 918734' 00:08:07.056 killing process with pid 918734 00:08:07.056 17:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 918734 00:08:07.056 17:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 918734 00:08:07.056 17:25:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:07.056 17:25:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:07.056 00:08:07.056 real 0m6.278s 00:08:07.056 user 0m5.978s 00:08:07.056 sys 0m0.592s 00:08:07.056 17:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.056 17:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:07.056 ************************************ 00:08:07.056 END TEST skip_rpc_with_json 00:08:07.056 ************************************ 00:08:07.056 17:25:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:07.056 17:25:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.056 17:25:06 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.056 17:25:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.056 ************************************ 00:08:07.056 START TEST skip_rpc_with_delay 00:08:07.056 ************************************ 00:08:07.056 17:25:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:08:07.056 17:25:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:07.056 17:25:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:08:07.056 17:25:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:07.056 17:25:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:07.056 17:25:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.056 17:25:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:07.056 17:25:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.056 17:25:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:07.056 17:25:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.056 17:25:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:07.056 17:25:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:07.056 17:25:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:07.056 [2024-10-14 17:25:06.127084] app.c: 
842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:08:07.056 17:25:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:08:07.056 17:25:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:07.056 17:25:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:07.056 17:25:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:07.056 00:08:07.056 real 0m0.069s 00:08:07.056 user 0m0.048s 00:08:07.056 sys 0m0.021s 00:08:07.056 17:25:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.056 17:25:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:07.056 ************************************ 00:08:07.056 END TEST skip_rpc_with_delay 00:08:07.056 ************************************ 00:08:07.056 17:25:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:07.056 17:25:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:07.056 17:25:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:07.056 17:25:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.056 17:25:06 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.056 17:25:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.316 ************************************ 00:08:07.316 START TEST exit_on_failed_rpc_init 00:08:07.316 ************************************ 00:08:07.316 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:08:07.316 17:25:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=919705 00:08:07.316 17:25:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 919705 00:08:07.316 17:25:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:07.316 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 919705 ']' 00:08:07.316 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.316 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:07.316 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.316 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:07.316 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:07.316 [2024-10-14 17:25:06.269960] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
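Before exercising the failure path, exit_on_failed_rpc_init waits for the first spdk_tgt to listen on /var/tmp/spdk.sock (the 'Waiting for process to start up...' line above). A hedged sketch of such a wait loop; it assumes the rpc.py client under spdk/scripts and the spdk_get_version RPC, and the retry budget mirrors the max_retries=100 local in the trace:

wait_for_rpc_socket() {
    local sock=$1 retries=${2:-100} i
    for ((i = 0; i < retries; i++)); do
        # spdk_get_version only answers once the RPC server is accepting.
        if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
               -s "$sock" spdk_get_version >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

wait_for_rpc_socket /var/tmp/spdk.sock || echo 'target never started listening'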
00:08:07.316 [2024-10-14 17:25:06.270004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid919705 ] 00:08:07.316 [2024-10-14 17:25:06.339957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.316 [2024-10-14 17:25:06.380127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.575 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:07.575 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:08:07.575 17:25:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:07.575 17:25:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:07.575 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:08:07.575 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:07.575 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:07.575 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.575 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:07.575 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.575 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:07.575 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.575 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:07.575 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:07.575 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:07.575 [2024-10-14 17:25:06.661611] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:08:07.575 [2024-10-14 17:25:06.661661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid919743 ] 00:08:07.834 [2024-10-14 17:25:06.731479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.834 [2024-10-14 17:25:06.772472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.834 [2024-10-14 17:25:06.772550] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
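The second spdk_tgt above dies because /var/tmp/spdk.sock is already bound ('RPC Unix domain socket path ... in use. Specify another.'). A hedged pre-flight check for that condition, assuming OpenBSD netcat with -U/-z support; a bare -S file test is not enough, since a stale socket file can outlive its listener:

sock=/var/tmp/spdk.sock
if [ -S "$sock" ] && nc -U -z "$sock" 2>/dev/null; then
    echo "RPC socket $sock in use; a second spdk_tgt on it would fail as above"
else
    echo "socket path free (or only a stale file remains)"
fi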
00:08:07.834 [2024-10-14 17:25:06.772560] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:07.834 [2024-10-14 17:25:06.772566] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.834 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:08:07.834 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:07.834 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:08:07.834 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:08:07.834 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:08:07.834 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:07.834 17:25:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:07.834 17:25:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 919705 00:08:07.834 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 919705 ']' 00:08:07.834 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 919705 00:08:07.834 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:08:07.834 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:07.834 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 919705 00:08:07.834 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:07.834 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:07.834 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 919705' 00:08:07.834 killing process with pid 919705 00:08:07.834 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 919705 00:08:07.834 17:25:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 919705 00:08:08.094 00:08:08.094 real 0m0.951s 00:08:08.094 user 0m1.009s 00:08:08.094 sys 0m0.396s 00:08:08.094 17:25:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.094 17:25:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:08.094 ************************************ 00:08:08.094 END TEST exit_on_failed_rpc_init 00:08:08.094 ************************************ 00:08:08.094 17:25:07 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:08.094 00:08:08.094 real 0m13.124s 00:08:08.094 user 0m12.345s 00:08:08.094 sys 0m1.585s 00:08:08.094 17:25:07 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.094 17:25:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.094 ************************************ 00:08:08.094 END TEST skip_rpc 00:08:08.094 ************************************ 00:08:08.353 17:25:07 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:08.353 17:25:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.353 17:25:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.353 17:25:07 -- 
common/autotest_common.sh@10 -- # set +x 00:08:08.353 ************************************ 00:08:08.353 START TEST rpc_client 00:08:08.353 ************************************ 00:08:08.353 17:25:07 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:08.353 * Looking for test storage... 00:08:08.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:08:08.353 17:25:07 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:08.353 17:25:07 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:08:08.353 17:25:07 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:08.353 17:25:07 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.353 17:25:07 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:08.353 17:25:07 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.353 17:25:07 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:08.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.353 --rc genhtml_branch_coverage=1 00:08:08.353 --rc genhtml_function_coverage=1 00:08:08.353 --rc genhtml_legend=1 00:08:08.353 --rc geninfo_all_blocks=1 00:08:08.353 --rc geninfo_unexecuted_blocks=1 00:08:08.353 00:08:08.353 ' 00:08:08.353 17:25:07 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:08.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.353 --rc genhtml_branch_coverage=1 00:08:08.353 --rc genhtml_function_coverage=1 00:08:08.353 --rc genhtml_legend=1 00:08:08.353 --rc geninfo_all_blocks=1 00:08:08.353 --rc geninfo_unexecuted_blocks=1 00:08:08.353 00:08:08.353 ' 00:08:08.353 17:25:07 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:08.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.353 --rc genhtml_branch_coverage=1 00:08:08.353 --rc genhtml_function_coverage=1 00:08:08.353 --rc genhtml_legend=1 00:08:08.353 --rc geninfo_all_blocks=1 00:08:08.353 --rc geninfo_unexecuted_blocks=1 00:08:08.353 00:08:08.353 ' 00:08:08.353 17:25:07 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:08.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.353 --rc genhtml_branch_coverage=1 00:08:08.353 --rc genhtml_function_coverage=1 00:08:08.353 --rc genhtml_legend=1 00:08:08.353 --rc geninfo_all_blocks=1 00:08:08.353 --rc geninfo_unexecuted_blocks=1 00:08:08.353 00:08:08.353 ' 00:08:08.353 17:25:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:08.353 OK 00:08:08.353 17:25:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:08.353 00:08:08.353 real 0m0.188s 00:08:08.353 user 0m0.109s 00:08:08.353 sys 0m0.093s 00:08:08.353 17:25:07 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.353 17:25:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:08.353 ************************************ 00:08:08.353 END TEST rpc_client 00:08:08.353 ************************************ 00:08:08.613 17:25:07 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
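The long scripts/common.sh trace above (lt 1.15 2) is the harness deciding whether the installed lcov predates 2.0 before extending LCOV_OPTS. A hedged re-sketch of that component-wise comparison, assuming purely numeric components; the real cmp_versions also handles the other operators:

version_lt() {
    # Split both versions on '.', '-' and ':', as the traced IFS=.-: does.
    local IFS='.-:'
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) v a b
    for ((v = 0; v < n; v++)); do
        a=${v1[v]:-0} b=${v2[v]:-0}    # missing components compare as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1    # equal versions are not less-than
}

version_lt 1.15 2 && echo '1.15 < 2: use the extended branch/function coverage flags'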
00:08:08.613 17:25:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.613 17:25:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.613 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:08:08.613 ************************************ 00:08:08.613 START TEST json_config 00:08:08.613 ************************************ 00:08:08.613 17:25:07 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:08.613 17:25:07 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:08.613 17:25:07 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:08:08.613 17:25:07 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:08.613 17:25:07 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:08.613 17:25:07 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.614 17:25:07 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.614 17:25:07 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.614 17:25:07 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.614 17:25:07 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.614 17:25:07 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.614 17:25:07 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.614 17:25:07 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.614 17:25:07 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.614 17:25:07 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.614 17:25:07 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.614 17:25:07 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:08.614 17:25:07 json_config -- scripts/common.sh@345 -- # : 1 00:08:08.614 17:25:07 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.614 17:25:07 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.614 17:25:07 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:08.614 17:25:07 json_config -- scripts/common.sh@353 -- # local d=1 00:08:08.614 17:25:07 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.614 17:25:07 json_config -- scripts/common.sh@355 -- # echo 1 00:08:08.614 17:25:07 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.614 17:25:07 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:08.614 17:25:07 json_config -- scripts/common.sh@353 -- # local d=2 00:08:08.614 17:25:07 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.614 17:25:07 json_config -- scripts/common.sh@355 -- # echo 2 00:08:08.614 17:25:07 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.614 17:25:07 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.614 17:25:07 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.614 17:25:07 json_config -- scripts/common.sh@368 -- # return 0 00:08:08.614 17:25:07 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.614 17:25:07 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:08.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.614 --rc genhtml_branch_coverage=1 00:08:08.614 --rc genhtml_function_coverage=1 00:08:08.614 --rc genhtml_legend=1 00:08:08.614 --rc geninfo_all_blocks=1 00:08:08.614 --rc geninfo_unexecuted_blocks=1 00:08:08.614 00:08:08.614 ' 00:08:08.614 17:25:07 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:08.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.614 --rc genhtml_branch_coverage=1 00:08:08.614 --rc genhtml_function_coverage=1 00:08:08.614 --rc genhtml_legend=1 00:08:08.614 --rc geninfo_all_blocks=1 00:08:08.614 --rc geninfo_unexecuted_blocks=1 00:08:08.614 00:08:08.614 ' 00:08:08.614 17:25:07 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:08.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.614 --rc genhtml_branch_coverage=1 00:08:08.614 --rc genhtml_function_coverage=1 00:08:08.614 --rc genhtml_legend=1 00:08:08.614 --rc geninfo_all_blocks=1 00:08:08.614 --rc geninfo_unexecuted_blocks=1 00:08:08.614 00:08:08.614 ' 00:08:08.614 17:25:07 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:08.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.614 --rc genhtml_branch_coverage=1 00:08:08.614 --rc genhtml_function_coverage=1 00:08:08.614 --rc genhtml_legend=1 00:08:08.614 --rc geninfo_all_blocks=1 00:08:08.614 --rc geninfo_unexecuted_blocks=1 00:08:08.614 00:08:08.614 ' 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
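The nvmf/common.sh lines above derive NVME_HOSTNQN via nvme gen-hostnqn, which emits nqn.2014-08.org.nvmexpress:uuid:<uuid>. A hedged stand-in that produces the same shape without the nvme CLI, assuming util-linux uuidgen is installed:

uuid=$(uuidgen)    # stand-in for the UUID nvme gen-hostnqn would embed
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
NVME_HOSTID="$uuid"
# Same array shape as the trace: ready-made flags for 'nvme connect'.
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
printf '%s\n' "${NVME_HOST[@]}"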
00:08:08.614 17:25:07 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.614 17:25:07 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.614 17:25:07 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.614 17:25:07 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.614 17:25:07 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.614 17:25:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.614 17:25:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.614 17:25:07 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.614 17:25:07 json_config -- paths/export.sh@5 -- # export PATH 00:08:08.614 17:25:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@51 -- # : 0 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
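The '[: : integer expression expected' complaint a few lines below comes from nvmf/common.sh line 33 testing an empty string arithmetically: '[' '' -eq 1 ']'. A hedged reproduction of the failure and the usual defensive rewrite:

VAR=''
if [ "$VAR" -eq 1 ] 2>/dev/null; then
    echo 'never reached'
else
    echo "test errors out: '' is not an integer (exit status 2)"
fi
# Defaulting the expansion keeps the test well-formed:
[ "${VAR:-0}" -eq 1 ] || echo 'with :-0 the comparison is simply false'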
00:08:08.614 17:25:07 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:08.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:08.614 17:25:07 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:08:08.614 INFO: JSON configuration test init 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:08:08.614 17:25:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:08.614 17:25:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:08:08.614 17:25:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:08.614 17:25:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:08.614 17:25:07 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:08:08.614 17:25:07 json_config -- 
json_config/common.sh@9 -- # local app=target 00:08:08.614 17:25:07 json_config -- json_config/common.sh@10 -- # shift 00:08:08.614 17:25:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:08.614 17:25:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:08.614 17:25:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:08.614 17:25:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:08.614 17:25:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:08.614 17:25:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=920074 00:08:08.614 17:25:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:08.614 Waiting for target to run... 00:08:08.614 17:25:07 json_config -- json_config/common.sh@25 -- # waitforlisten 920074 /var/tmp/spdk_tgt.sock 00:08:08.615 17:25:07 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:08.615 17:25:07 json_config -- common/autotest_common.sh@831 -- # '[' -z 920074 ']' 00:08:08.615 17:25:07 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:08.615 17:25:07 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.615 17:25:07 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:08.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:08.615 17:25:07 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.615 17:25:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:08.873 [2024-10-14 17:25:07.785379] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
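test/json_config/common.sh, sourced above, keys every app role through bash associative arrays: one pid, one control socket, one parameter string per role. A hedged sketch of that bookkeeping (bash 4+), with the pid value illustrative:

declare -A app_pid=([target]='' [initiator]='')
declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock [initiator]=/var/tmp/spdk_initiator.sock)
declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')

app=target
app_pid[$app]=$$    # illustrative: record a pid under its role
echo "app '$app': socket ${app_socket[$app]}, params ${app_params[$app]}, pid ${app_pid[$app]}"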
00:08:08.873 [2024-10-14 17:25:07.785423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid920074 ] 00:08:09.132 [2024-10-14 17:25:08.070368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.132 [2024-10-14 17:25:08.108379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.700 17:25:08 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.700 17:25:08 json_config -- common/autotest_common.sh@864 -- # return 0 00:08:09.700 17:25:08 json_config -- json_config/common.sh@26 -- # echo '' 00:08:09.700 00:08:09.700 17:25:08 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:08:09.700 17:25:08 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:08:09.700 17:25:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:09.700 17:25:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:09.700 17:25:08 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:08:09.700 17:25:08 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:08:09.700 17:25:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:09.700 17:25:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:09.700 17:25:08 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:09.700 17:25:08 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:08:09.700 17:25:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:12.987 17:25:11 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:08:12.987 17:25:11 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:12.987 17:25:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:12.987 17:25:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:12.987 17:25:11 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:08:12.987 17:25:11 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:12.987 17:25:11 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:12.987 17:25:11 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:08:12.987 17:25:11 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:08:12.987 17:25:11 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:12.987 17:25:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:12.987 17:25:11 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:12.987 17:25:11 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:08:12.987 17:25:11 json_config -- json_config/json_config.sh@51 -- # local get_types 00:08:12.987 17:25:11 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:08:12.987 17:25:11 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:08:12.987 17:25:11 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:08:12.987 17:25:11 json_config -- json_config/json_config.sh@54 -- # sort 00:08:12.987 17:25:11 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:08:12.988 17:25:11 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:08:12.988 17:25:11 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:08:12.988 17:25:11 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:08:12.988 17:25:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:12.988 17:25:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:12.988 17:25:11 json_config -- json_config/json_config.sh@62 -- # return 0 00:08:12.988 17:25:11 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:08:12.988 17:25:11 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:08:12.988 17:25:11 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:08:12.988 17:25:11 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:08:12.988 17:25:11 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:08:12.988 17:25:11 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:08:12.988 17:25:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:12.988 17:25:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:12.988 17:25:11 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:12.988 17:25:11 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:08:12.988 17:25:11 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:08:12.988 17:25:11 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:12.988 17:25:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:13.247 MallocForNvmf0 00:08:13.247 17:25:12 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:13.247 17:25:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:13.247 MallocForNvmf1 00:08:13.506 17:25:12 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:13.506 17:25:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:13.506 [2024-10-14 17:25:12.572769] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.506 17:25:12 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:13.506 17:25:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:13.764 17:25:12 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:13.765 17:25:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:14.023 17:25:12 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:14.023 17:25:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:14.282 17:25:13 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:14.282 17:25:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:14.282 [2024-10-14 17:25:13.335122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:14.282 17:25:13 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:08:14.282 17:25:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:14.282 17:25:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:14.282 17:25:13 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:08:14.282 17:25:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:14.282 17:25:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:14.541 17:25:13 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:08:14.541 17:25:13 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:14.541 17:25:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:14.541 MallocBdevForConfigChangeCheck 00:08:14.541 17:25:13 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:08:14.541 17:25:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:14.541 17:25:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:14.541 17:25:13 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:08:14.541 17:25:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:15.152 17:25:14 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:08:15.152 INFO: shutting down applications... 
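For reference, the create_nvmf_subsystem_config phase traced above boils down to the following manual RPC sequence. This is a minimal sketch: the commands are copied from the trace, while the comments and the $SPDK_DIR shorthand for the long workspace path are added here.

    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    $rpc bdev_malloc_create 8 512  --name MallocForNvmf0    # 8 MB malloc bdev, 512 B blocks
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB malloc bdev, 1024 B blocks
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport; -u io-unit size, -c in-capsule data size
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allows any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420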
00:08:15.152 17:25:14 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:08:15.152 17:25:14 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:08:15.152 17:25:14 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:08:15.152 17:25:14 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:17.688 Calling clear_iscsi_subsystem 00:08:17.688 Calling clear_nvmf_subsystem 00:08:17.688 Calling clear_nbd_subsystem 00:08:17.688 Calling clear_ublk_subsystem 00:08:17.688 Calling clear_vhost_blk_subsystem 00:08:17.688 Calling clear_vhost_scsi_subsystem 00:08:17.688 Calling clear_bdev_subsystem 00:08:17.688 17:25:16 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:08:17.688 17:25:16 json_config -- json_config/json_config.sh@350 -- # count=100 00:08:17.688 17:25:16 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:08:17.688 17:25:16 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:17.688 17:25:16 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:17.688 17:25:16 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:08:17.688 17:25:16 json_config -- json_config/json_config.sh@352 -- # break 00:08:17.688 17:25:16 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:08:17.688 17:25:16 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:08:17.688 17:25:16 json_config -- json_config/common.sh@31 -- # local app=target 00:08:17.688 17:25:16 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:17.688 17:25:16 json_config -- json_config/common.sh@35 -- # [[ -n 920074 ]] 00:08:17.688 17:25:16 json_config -- json_config/common.sh@38 -- # kill -SIGINT 920074 00:08:17.688 17:25:16 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:17.688 17:25:16 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:17.688 17:25:16 json_config -- json_config/common.sh@41 -- # kill -0 920074 00:08:17.688 17:25:16 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:18.257 17:25:17 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:18.257 17:25:17 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:18.257 17:25:17 json_config -- json_config/common.sh@41 -- # kill -0 920074 00:08:18.257 17:25:17 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:18.257 17:25:17 json_config -- json_config/common.sh@43 -- # break 00:08:18.257 17:25:17 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:18.257 17:25:17 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:18.257 SPDK target shutdown done 00:08:18.257 17:25:17 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:08:18.257 INFO: relaunching applications... 
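The shutdown traced above is json_config/common.sh's wait loop: send SIGINT, then probe with kill -0 until the process exits. A minimal sketch, with the roughly 15-second budget read off the (( i < 30 )) guard and the 0.5 s sleep in the trace:

    kill -SIGINT "$pid"                      # ask spdk_tgt to exit cleanly
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break  # kill -0 sends no signal, it only tests liveness
        sleep 0.5
    done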
00:08:18.257 17:25:17 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:18.257 17:25:17 json_config -- json_config/common.sh@9 -- # local app=target 00:08:18.257 17:25:17 json_config -- json_config/common.sh@10 -- # shift 00:08:18.257 17:25:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:18.257 17:25:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:18.257 17:25:17 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:18.257 17:25:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:18.257 17:25:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:18.257 17:25:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=921812 00:08:18.257 17:25:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:18.257 Waiting for target to run... 00:08:18.257 17:25:17 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:18.257 17:25:17 json_config -- json_config/common.sh@25 -- # waitforlisten 921812 /var/tmp/spdk_tgt.sock 00:08:18.257 17:25:17 json_config -- common/autotest_common.sh@831 -- # '[' -z 921812 ']' 00:08:18.257 17:25:17 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:18.257 17:25:17 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.257 17:25:17 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:18.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:18.257 17:25:17 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.257 17:25:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:18.257 [2024-10-14 17:25:17.160471] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:08:18.257 [2024-10-14 17:25:17.160527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid921812 ] 00:08:18.516 [2024-10-14 17:25:17.607535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.775 [2024-10-14 17:25:17.663532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.070 [2024-10-14 17:25:20.698648] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.070 [2024-10-14 17:25:20.730983] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:22.330 17:25:21 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.330 17:25:21 json_config -- common/autotest_common.sh@864 -- # return 0 00:08:22.330 17:25:21 json_config -- json_config/common.sh@26 -- # echo '' 00:08:22.330 00:08:22.330 17:25:21 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:08:22.330 17:25:21 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:22.330 INFO: Checking if target configuration is the same... 
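The json_diff.sh trace that follows reduces to sort-then-diff: the live configuration (save_config over the RPC socket) and the saved spdk_tgt_config.json are both normalized by config_filter.py -method sort before being compared. A rough sketch; the real script receives the live config on /dev/fd/62 via process substitution:

    live=$(mktemp /tmp/62.XXX)
    saved=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
        | "$SPDK_DIR/test/json_config/config_filter.py" -method sort > "$live"
    "$SPDK_DIR/test/json_config/config_filter.py" -method sort \
        < "$SPDK_DIR/spdk_tgt_config.json" > "$saved"
    diff -u "$live" "$saved" && echo 'INFO: JSON config files are the same'
    rm "$live" "$saved"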
00:08:22.330 17:25:21 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:22.330 17:25:21 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:08:22.330 17:25:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:22.330 + '[' 2 -ne 2 ']' 00:08:22.330 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:22.330 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:22.330 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:22.330 +++ basename /dev/fd/62 00:08:22.330 ++ mktemp /tmp/62.XXX 00:08:22.330 + tmp_file_1=/tmp/62.rqX 00:08:22.330 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:22.330 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:22.330 + tmp_file_2=/tmp/spdk_tgt_config.json.kRt 00:08:22.330 + ret=0 00:08:22.330 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:22.589 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:22.848 + diff -u /tmp/62.rqX /tmp/spdk_tgt_config.json.kRt 00:08:22.848 + echo 'INFO: JSON config files are the same' 00:08:22.848 INFO: JSON config files are the same 00:08:22.848 + rm /tmp/62.rqX /tmp/spdk_tgt_config.json.kRt 00:08:22.848 + exit 0 00:08:22.848 17:25:21 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:08:22.848 17:25:21 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:22.848 INFO: changing configuration and checking if this can be detected... 00:08:22.848 17:25:21 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:22.848 17:25:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:22.848 17:25:21 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:22.848 17:25:21 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:08:22.848 17:25:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:22.848 + '[' 2 -ne 2 ']' 00:08:22.848 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:22.848 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:08:22.848 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:22.848 +++ basename /dev/fd/62 00:08:22.848 ++ mktemp /tmp/62.XXX 00:08:22.848 + tmp_file_1=/tmp/62.f3m 00:08:22.848 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:22.848 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:23.107 + tmp_file_2=/tmp/spdk_tgt_config.json.EFI 00:08:23.107 + ret=0 00:08:23.107 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:23.366 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:23.366 + diff -u /tmp/62.f3m /tmp/spdk_tgt_config.json.EFI 00:08:23.366 + ret=1 00:08:23.366 + echo '=== Start of file: /tmp/62.f3m ===' 00:08:23.366 + cat /tmp/62.f3m 00:08:23.366 + echo '=== End of file: /tmp/62.f3m ===' 00:08:23.366 + echo '' 00:08:23.366 + echo '=== Start of file: /tmp/spdk_tgt_config.json.EFI ===' 00:08:23.366 + cat /tmp/spdk_tgt_config.json.EFI 00:08:23.366 + echo '=== End of file: /tmp/spdk_tgt_config.json.EFI ===' 00:08:23.366 + echo '' 00:08:23.366 + rm /tmp/62.f3m /tmp/spdk_tgt_config.json.EFI 00:08:23.366 + exit 1 00:08:23.366 17:25:22 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:08:23.366 INFO: configuration change detected. 00:08:23.366 17:25:22 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:08:23.366 17:25:22 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:08:23.366 17:25:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:23.366 17:25:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:23.366 17:25:22 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:08:23.366 17:25:22 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:08:23.366 17:25:22 json_config -- json_config/json_config.sh@324 -- # [[ -n 921812 ]] 00:08:23.366 17:25:22 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:08:23.366 17:25:22 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:08:23.366 17:25:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:23.366 17:25:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:23.366 17:25:22 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:08:23.366 17:25:22 json_config -- json_config/json_config.sh@200 -- # uname -s 00:08:23.366 17:25:22 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:08:23.366 17:25:22 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:08:23.366 17:25:22 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:08:23.366 17:25:22 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:08:23.366 17:25:22 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:23.366 17:25:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:23.366 17:25:22 json_config -- json_config/json_config.sh@330 -- # killprocess 921812 00:08:23.366 17:25:22 json_config -- common/autotest_common.sh@950 -- # '[' -z 921812 ']' 00:08:23.366 17:25:22 json_config -- common/autotest_common.sh@954 -- # kill -0 921812 00:08:23.366 17:25:22 json_config -- common/autotest_common.sh@955 -- # uname 00:08:23.366 17:25:22 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:23.366 17:25:22 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 921812 00:08:23.366 17:25:22 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:23.366 17:25:22 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:23.367 17:25:22 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 921812' 00:08:23.367 killing process with pid 921812 00:08:23.367 17:25:22 json_config -- common/autotest_common.sh@969 -- # kill 921812 00:08:23.367 17:25:22 json_config -- common/autotest_common.sh@974 -- # wait 921812 00:08:25.905 17:25:24 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:25.905 17:25:24 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:08:25.905 17:25:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:25.905 17:25:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:25.905 17:25:24 json_config -- json_config/json_config.sh@335 -- # return 0 00:08:25.905 17:25:24 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:08:25.905 INFO: Success 00:08:25.905 00:08:25.905 real 0m16.956s 00:08:25.905 user 0m17.582s 00:08:25.905 sys 0m2.549s 00:08:25.905 17:25:24 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.905 17:25:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:25.905 ************************************ 00:08:25.905 END TEST json_config 00:08:25.905 ************************************ 00:08:25.905 17:25:24 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:25.905 17:25:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.905 17:25:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.905 17:25:24 -- common/autotest_common.sh@10 -- # set +x 00:08:25.905 ************************************ 00:08:25.905 START TEST json_config_extra_key 00:08:25.905 ************************************ 00:08:25.905 17:25:24 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:25.905 17:25:24 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:25.905 17:25:24 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:08:25.905 17:25:24 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:25.905 17:25:24 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:25.905 17:25:24 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.905 17:25:24 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.905 17:25:24 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.905 17:25:24 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.905 17:25:24 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.905 17:25:24 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.905 17:25:24 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.905 17:25:24 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.905 17:25:24 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:08:25.905 17:25:24 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.905 17:25:24 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:25.906 17:25:24 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.906 17:25:24 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:25.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.906 --rc genhtml_branch_coverage=1 00:08:25.906 --rc genhtml_function_coverage=1 00:08:25.906 --rc genhtml_legend=1 00:08:25.906 --rc geninfo_all_blocks=1 00:08:25.906 --rc geninfo_unexecuted_blocks=1 00:08:25.906 00:08:25.906 ' 00:08:25.906 17:25:24 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:25.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.906 --rc genhtml_branch_coverage=1 00:08:25.906 --rc genhtml_function_coverage=1 00:08:25.906 --rc genhtml_legend=1 00:08:25.906 --rc geninfo_all_blocks=1 00:08:25.906 --rc geninfo_unexecuted_blocks=1 00:08:25.906 00:08:25.906 ' 00:08:25.906 17:25:24 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:25.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.906 --rc genhtml_branch_coverage=1 00:08:25.906 --rc genhtml_function_coverage=1 00:08:25.906 --rc genhtml_legend=1 00:08:25.906 --rc geninfo_all_blocks=1 00:08:25.906 --rc geninfo_unexecuted_blocks=1 00:08:25.906 00:08:25.906 ' 00:08:25.906 17:25:24 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:25.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.906 --rc genhtml_branch_coverage=1 00:08:25.906 --rc genhtml_function_coverage=1 00:08:25.906 --rc genhtml_legend=1 00:08:25.906 --rc geninfo_all_blocks=1 00:08:25.906 --rc geninfo_unexecuted_blocks=1 00:08:25.906 00:08:25.906 ' 00:08:25.906 17:25:24 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.906 17:25:24 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.906 17:25:24 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.906 17:25:24 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.906 17:25:24 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.906 17:25:24 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:25.906 17:25:24 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:25.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:25.906 17:25:24 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:25.906 17:25:24 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:25.906 17:25:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:25.906 17:25:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:25.906 17:25:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:25.906 17:25:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:25.906 17:25:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:25.906 17:25:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:25.906 17:25:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:25.906 17:25:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:25.906 17:25:24 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:25.906 17:25:24 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:25.906 INFO: launching applications... 
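The launch that follows is driven by the associative arrays declared just above (app_pid, app_socket, app_params, configs_path). json_config_test_start_app roughly does the following; a sketch with error handling omitted, where waitforlisten is the polling helper from autotest_common.sh:

    declare -A app_pid=()
    app=target
    "$SPDK_DIR/build/bin/spdk_tgt" ${app_params[$app]} $app_extra_params \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!
    waitforlisten "${app_pid[$app]}" "${app_socket[$app]}"   # blocks until the RPC socket answers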
00:08:25.906 17:25:24 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:25.906 17:25:24 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:25.906 17:25:24 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:25.906 17:25:24 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:25.906 17:25:24 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:25.906 17:25:24 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:25.906 17:25:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:25.906 17:25:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:25.906 17:25:24 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=923259 00:08:25.906 17:25:24 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:25.906 Waiting for target to run... 00:08:25.906 17:25:24 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 923259 /var/tmp/spdk_tgt.sock 00:08:25.906 17:25:24 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 923259 ']' 00:08:25.906 17:25:24 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:25.906 17:25:24 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:25.906 17:25:24 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.906 17:25:24 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:25.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:25.906 17:25:24 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.906 17:25:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:25.906 [2024-10-14 17:25:24.797381] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:08:25.906 [2024-10-14 17:25:24.797431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid923259 ] 00:08:26.165 [2024-10-14 17:25:25.075647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.165 [2024-10-14 17:25:25.109040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.732 17:25:25 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.732 17:25:25 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:08:26.732 17:25:25 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:26.732 00:08:26.732 17:25:25 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:26.732 INFO: shutting down applications... 
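A side note on the '[: : integer expression expected' message emitted earlier from nvmf/common.sh line 33: the trace shows it comes from [ '' -eq 1 ], and the POSIX test's -eq requires integer operands, so an unset or empty variable makes the test error out (exit status 2) rather than evaluate to false. The test suite tolerates this, but a guarded form avoids the noise; a sketch:

    var=''
    [ "$var" -eq 1 ]        # -> "[: : integer expression expected", status 2
    [ "${var:-0}" -eq 1 ]   # empty degrades to 0, so this is simply false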
00:08:26.732 17:25:25 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:26.732 17:25:25 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:26.732 17:25:25 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:26.732 17:25:25 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 923259 ]] 00:08:26.732 17:25:25 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 923259 00:08:26.732 17:25:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:26.732 17:25:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:26.732 17:25:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 923259 00:08:26.732 17:25:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:26.991 17:25:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:26.991 17:25:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:26.991 17:25:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 923259 00:08:26.991 17:25:26 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:26.991 17:25:26 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:26.991 17:25:26 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:26.991 17:25:26 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:26.991 SPDK target shutdown done 00:08:26.991 17:25:26 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:26.991 Success 00:08:26.991 00:08:26.991 real 0m1.564s 00:08:26.991 user 0m1.333s 00:08:26.991 sys 0m0.404s 00:08:26.991 17:25:26 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.991 17:25:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:26.991 ************************************ 00:08:26.991 END TEST json_config_extra_key 00:08:26.991 ************************************ 00:08:27.251 17:25:26 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:27.251 17:25:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.251 17:25:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.251 17:25:26 -- common/autotest_common.sh@10 -- # set +x 00:08:27.251 ************************************ 00:08:27.251 START TEST alias_rpc 00:08:27.251 ************************************ 00:08:27.251 17:25:26 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:27.251 * Looking for test storage... 
00:08:27.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:08:27.251 17:25:26 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:27.251 17:25:26 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:27.251 17:25:26 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:27.251 17:25:26 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.251 17:25:26 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:27.251 17:25:26 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.251 17:25:26 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:27.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.251 --rc genhtml_branch_coverage=1 00:08:27.251 --rc genhtml_function_coverage=1 00:08:27.251 --rc genhtml_legend=1 00:08:27.251 --rc geninfo_all_blocks=1 00:08:27.251 --rc geninfo_unexecuted_blocks=1 00:08:27.251 00:08:27.251 ' 00:08:27.251 17:25:26 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:27.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.251 --rc genhtml_branch_coverage=1 00:08:27.251 --rc genhtml_function_coverage=1 00:08:27.251 --rc genhtml_legend=1 00:08:27.251 --rc geninfo_all_blocks=1 00:08:27.251 --rc geninfo_unexecuted_blocks=1 00:08:27.251 00:08:27.251 ' 00:08:27.251 17:25:26 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:27.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.251 --rc genhtml_branch_coverage=1 00:08:27.251 --rc genhtml_function_coverage=1 00:08:27.251 --rc genhtml_legend=1 00:08:27.251 --rc geninfo_all_blocks=1 00:08:27.251 --rc geninfo_unexecuted_blocks=1 00:08:27.251 00:08:27.251 ' 00:08:27.251 17:25:26 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:27.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.252 --rc genhtml_branch_coverage=1 00:08:27.252 --rc genhtml_function_coverage=1 00:08:27.252 --rc genhtml_legend=1 00:08:27.252 --rc geninfo_all_blocks=1 00:08:27.252 --rc geninfo_unexecuted_blocks=1 00:08:27.252 00:08:27.252 ' 00:08:27.252 17:25:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:27.252 17:25:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=923601 00:08:27.252 17:25:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:27.252 17:25:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 923601 00:08:27.252 17:25:26 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 923601 ']' 00:08:27.252 17:25:26 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.252 17:25:26 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.252 17:25:26 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.252 17:25:26 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.252 17:25:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.511 [2024-10-14 17:25:26.411990] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:08:27.511 [2024-10-14 17:25:26.412032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid923601 ] 00:08:27.511 [2024-10-14 17:25:26.477500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.511 [2024-10-14 17:25:26.519690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.770 17:25:26 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.770 17:25:26 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:27.770 17:25:26 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:08:28.029 17:25:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 923601 00:08:28.029 17:25:26 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 923601 ']' 00:08:28.029 17:25:26 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 923601 00:08:28.029 17:25:26 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:08:28.029 17:25:26 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.029 17:25:26 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 923601 00:08:28.029 17:25:27 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:28.029 17:25:27 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:28.029 17:25:27 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 923601' 00:08:28.029 killing process with pid 923601 00:08:28.029 17:25:27 alias_rpc -- common/autotest_common.sh@969 -- # kill 923601 00:08:28.029 17:25:27 alias_rpc -- common/autotest_common.sh@974 -- # wait 923601 00:08:28.288 00:08:28.288 real 0m1.103s 00:08:28.288 user 0m1.135s 00:08:28.288 sys 0m0.401s 00:08:28.288 17:25:27 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.288 17:25:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.288 ************************************ 00:08:28.288 END TEST alias_rpc 00:08:28.288 ************************************ 00:08:28.288 17:25:27 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:28.288 17:25:27 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:28.288 17:25:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:28.288 17:25:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.288 17:25:27 -- common/autotest_common.sh@10 -- # set +x 00:08:28.288 ************************************ 00:08:28.288 START TEST spdkcli_tcp 00:08:28.288 ************************************ 00:08:28.288 17:25:27 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:28.548 * Looking for test storage... 
00:08:28.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:08:28.548 17:25:27 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:28.548 17:25:27 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:08:28.548 17:25:27 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:28.548 17:25:27 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.548 17:25:27 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:28.548 17:25:27 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.548 17:25:27 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:28.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.548 --rc genhtml_branch_coverage=1 00:08:28.548 --rc genhtml_function_coverage=1 00:08:28.548 --rc genhtml_legend=1 00:08:28.548 --rc geninfo_all_blocks=1 00:08:28.548 --rc geninfo_unexecuted_blocks=1 00:08:28.548 00:08:28.548 ' 00:08:28.548 17:25:27 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:28.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.548 --rc genhtml_branch_coverage=1 00:08:28.548 --rc genhtml_function_coverage=1 00:08:28.548 --rc genhtml_legend=1 00:08:28.548 --rc geninfo_all_blocks=1 00:08:28.548 --rc 
geninfo_unexecuted_blocks=1 00:08:28.548 00:08:28.548 ' 00:08:28.548 17:25:27 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:28.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.548 --rc genhtml_branch_coverage=1 00:08:28.548 --rc genhtml_function_coverage=1 00:08:28.548 --rc genhtml_legend=1 00:08:28.548 --rc geninfo_all_blocks=1 00:08:28.548 --rc geninfo_unexecuted_blocks=1 00:08:28.548 00:08:28.548 ' 00:08:28.548 17:25:27 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:28.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.548 --rc genhtml_branch_coverage=1 00:08:28.548 --rc genhtml_function_coverage=1 00:08:28.548 --rc genhtml_legend=1 00:08:28.548 --rc geninfo_all_blocks=1 00:08:28.548 --rc geninfo_unexecuted_blocks=1 00:08:28.548 00:08:28.548 ' 00:08:28.548 17:25:27 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:08:28.548 17:25:27 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:08:28.548 17:25:27 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:08:28.548 17:25:27 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:28.548 17:25:27 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:28.548 17:25:27 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:28.548 17:25:27 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:28.548 17:25:27 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:28.548 17:25:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:28.548 17:25:27 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=923869 00:08:28.548 17:25:27 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 923869 00:08:28.548 17:25:27 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:28.548 17:25:27 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 923869 ']' 00:08:28.548 17:25:27 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.548 17:25:27 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.548 17:25:27 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.548 17:25:27 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.548 17:25:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:28.548 [2024-10-14 17:25:27.601830] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:08:28.548 [2024-10-14 17:25:27.601884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid923869 ] 00:08:28.548 [2024-10-14 17:25:27.670672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:28.807 [2024-10-14 17:25:27.712137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.807 [2024-10-14 17:25:27.712138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.807 17:25:27 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.807 17:25:27 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:08:28.807 17:25:27 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=923892 00:08:28.807 17:25:27 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:28.807 17:25:27 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:29.067 [ 00:08:29.067 "bdev_malloc_delete", 00:08:29.067 "bdev_malloc_create", 00:08:29.067 "bdev_null_resize", 00:08:29.067 "bdev_null_delete", 00:08:29.067 "bdev_null_create", 00:08:29.067 "bdev_nvme_cuse_unregister", 00:08:29.067 "bdev_nvme_cuse_register", 00:08:29.067 "bdev_opal_new_user", 00:08:29.067 "bdev_opal_set_lock_state", 00:08:29.067 "bdev_opal_delete", 00:08:29.067 "bdev_opal_get_info", 00:08:29.067 "bdev_opal_create", 00:08:29.067 "bdev_nvme_opal_revert", 00:08:29.067 "bdev_nvme_opal_init", 00:08:29.067 "bdev_nvme_send_cmd", 00:08:29.067 "bdev_nvme_set_keys", 00:08:29.067 "bdev_nvme_get_path_iostat", 00:08:29.067 "bdev_nvme_get_mdns_discovery_info", 00:08:29.067 "bdev_nvme_stop_mdns_discovery", 00:08:29.067 "bdev_nvme_start_mdns_discovery", 00:08:29.067 "bdev_nvme_set_multipath_policy", 00:08:29.067 "bdev_nvme_set_preferred_path", 00:08:29.067 "bdev_nvme_get_io_paths", 00:08:29.067 "bdev_nvme_remove_error_injection", 00:08:29.067 "bdev_nvme_add_error_injection", 00:08:29.067 "bdev_nvme_get_discovery_info", 00:08:29.067 "bdev_nvme_stop_discovery", 00:08:29.067 "bdev_nvme_start_discovery", 00:08:29.067 "bdev_nvme_get_controller_health_info", 00:08:29.067 "bdev_nvme_disable_controller", 00:08:29.067 "bdev_nvme_enable_controller", 00:08:29.067 "bdev_nvme_reset_controller", 00:08:29.067 "bdev_nvme_get_transport_statistics", 00:08:29.067 "bdev_nvme_apply_firmware", 00:08:29.067 "bdev_nvme_detach_controller", 00:08:29.067 "bdev_nvme_get_controllers", 00:08:29.067 "bdev_nvme_attach_controller", 00:08:29.067 "bdev_nvme_set_hotplug", 00:08:29.067 "bdev_nvme_set_options", 00:08:29.067 "bdev_passthru_delete", 00:08:29.067 "bdev_passthru_create", 00:08:29.067 "bdev_lvol_set_parent_bdev", 00:08:29.067 "bdev_lvol_set_parent", 00:08:29.067 "bdev_lvol_check_shallow_copy", 00:08:29.067 "bdev_lvol_start_shallow_copy", 00:08:29.067 "bdev_lvol_grow_lvstore", 00:08:29.067 "bdev_lvol_get_lvols", 00:08:29.067 "bdev_lvol_get_lvstores", 00:08:29.067 "bdev_lvol_delete", 00:08:29.067 "bdev_lvol_set_read_only", 00:08:29.067 "bdev_lvol_resize", 00:08:29.067 "bdev_lvol_decouple_parent", 00:08:29.067 "bdev_lvol_inflate", 00:08:29.067 "bdev_lvol_rename", 00:08:29.067 "bdev_lvol_clone_bdev", 00:08:29.067 "bdev_lvol_clone", 00:08:29.067 "bdev_lvol_snapshot", 00:08:29.067 "bdev_lvol_create", 00:08:29.067 "bdev_lvol_delete_lvstore", 00:08:29.067 "bdev_lvol_rename_lvstore", 
00:08:29.067 "bdev_lvol_create_lvstore", 00:08:29.067 "bdev_raid_set_options", 00:08:29.067 "bdev_raid_remove_base_bdev", 00:08:29.067 "bdev_raid_add_base_bdev", 00:08:29.067 "bdev_raid_delete", 00:08:29.067 "bdev_raid_create", 00:08:29.067 "bdev_raid_get_bdevs", 00:08:29.067 "bdev_error_inject_error", 00:08:29.067 "bdev_error_delete", 00:08:29.067 "bdev_error_create", 00:08:29.067 "bdev_split_delete", 00:08:29.067 "bdev_split_create", 00:08:29.067 "bdev_delay_delete", 00:08:29.067 "bdev_delay_create", 00:08:29.067 "bdev_delay_update_latency", 00:08:29.067 "bdev_zone_block_delete", 00:08:29.067 "bdev_zone_block_create", 00:08:29.067 "blobfs_create", 00:08:29.067 "blobfs_detect", 00:08:29.067 "blobfs_set_cache_size", 00:08:29.067 "bdev_aio_delete", 00:08:29.067 "bdev_aio_rescan", 00:08:29.067 "bdev_aio_create", 00:08:29.067 "bdev_ftl_set_property", 00:08:29.067 "bdev_ftl_get_properties", 00:08:29.067 "bdev_ftl_get_stats", 00:08:29.067 "bdev_ftl_unmap", 00:08:29.067 "bdev_ftl_unload", 00:08:29.067 "bdev_ftl_delete", 00:08:29.067 "bdev_ftl_load", 00:08:29.067 "bdev_ftl_create", 00:08:29.067 "bdev_virtio_attach_controller", 00:08:29.067 "bdev_virtio_scsi_get_devices", 00:08:29.067 "bdev_virtio_detach_controller", 00:08:29.067 "bdev_virtio_blk_set_hotplug", 00:08:29.067 "bdev_iscsi_delete", 00:08:29.067 "bdev_iscsi_create", 00:08:29.067 "bdev_iscsi_set_options", 00:08:29.067 "accel_error_inject_error", 00:08:29.067 "ioat_scan_accel_module", 00:08:29.067 "dsa_scan_accel_module", 00:08:29.067 "iaa_scan_accel_module", 00:08:29.067 "vfu_virtio_create_fs_endpoint", 00:08:29.067 "vfu_virtio_create_scsi_endpoint", 00:08:29.067 "vfu_virtio_scsi_remove_target", 00:08:29.067 "vfu_virtio_scsi_add_target", 00:08:29.067 "vfu_virtio_create_blk_endpoint", 00:08:29.067 "vfu_virtio_delete_endpoint", 00:08:29.067 "keyring_file_remove_key", 00:08:29.067 "keyring_file_add_key", 00:08:29.067 "keyring_linux_set_options", 00:08:29.067 "fsdev_aio_delete", 00:08:29.067 "fsdev_aio_create", 00:08:29.067 "iscsi_get_histogram", 00:08:29.067 "iscsi_enable_histogram", 00:08:29.067 "iscsi_set_options", 00:08:29.067 "iscsi_get_auth_groups", 00:08:29.067 "iscsi_auth_group_remove_secret", 00:08:29.068 "iscsi_auth_group_add_secret", 00:08:29.068 "iscsi_delete_auth_group", 00:08:29.068 "iscsi_create_auth_group", 00:08:29.068 "iscsi_set_discovery_auth", 00:08:29.068 "iscsi_get_options", 00:08:29.068 "iscsi_target_node_request_logout", 00:08:29.068 "iscsi_target_node_set_redirect", 00:08:29.068 "iscsi_target_node_set_auth", 00:08:29.068 "iscsi_target_node_add_lun", 00:08:29.068 "iscsi_get_stats", 00:08:29.068 "iscsi_get_connections", 00:08:29.068 "iscsi_portal_group_set_auth", 00:08:29.068 "iscsi_start_portal_group", 00:08:29.068 "iscsi_delete_portal_group", 00:08:29.068 "iscsi_create_portal_group", 00:08:29.068 "iscsi_get_portal_groups", 00:08:29.068 "iscsi_delete_target_node", 00:08:29.068 "iscsi_target_node_remove_pg_ig_maps", 00:08:29.068 "iscsi_target_node_add_pg_ig_maps", 00:08:29.068 "iscsi_create_target_node", 00:08:29.068 "iscsi_get_target_nodes", 00:08:29.068 "iscsi_delete_initiator_group", 00:08:29.068 "iscsi_initiator_group_remove_initiators", 00:08:29.068 "iscsi_initiator_group_add_initiators", 00:08:29.068 "iscsi_create_initiator_group", 00:08:29.068 "iscsi_get_initiator_groups", 00:08:29.068 "nvmf_set_crdt", 00:08:29.068 "nvmf_set_config", 00:08:29.068 "nvmf_set_max_subsystems", 00:08:29.068 "nvmf_stop_mdns_prr", 00:08:29.068 "nvmf_publish_mdns_prr", 00:08:29.068 "nvmf_subsystem_get_listeners", 00:08:29.068 
"nvmf_subsystem_get_qpairs", 00:08:29.068 "nvmf_subsystem_get_controllers", 00:08:29.068 "nvmf_get_stats", 00:08:29.068 "nvmf_get_transports", 00:08:29.068 "nvmf_create_transport", 00:08:29.068 "nvmf_get_targets", 00:08:29.068 "nvmf_delete_target", 00:08:29.068 "nvmf_create_target", 00:08:29.068 "nvmf_subsystem_allow_any_host", 00:08:29.068 "nvmf_subsystem_set_keys", 00:08:29.068 "nvmf_subsystem_remove_host", 00:08:29.068 "nvmf_subsystem_add_host", 00:08:29.068 "nvmf_ns_remove_host", 00:08:29.068 "nvmf_ns_add_host", 00:08:29.068 "nvmf_subsystem_remove_ns", 00:08:29.068 "nvmf_subsystem_set_ns_ana_group", 00:08:29.068 "nvmf_subsystem_add_ns", 00:08:29.068 "nvmf_subsystem_listener_set_ana_state", 00:08:29.068 "nvmf_discovery_get_referrals", 00:08:29.068 "nvmf_discovery_remove_referral", 00:08:29.068 "nvmf_discovery_add_referral", 00:08:29.068 "nvmf_subsystem_remove_listener", 00:08:29.068 "nvmf_subsystem_add_listener", 00:08:29.068 "nvmf_delete_subsystem", 00:08:29.068 "nvmf_create_subsystem", 00:08:29.068 "nvmf_get_subsystems", 00:08:29.068 "env_dpdk_get_mem_stats", 00:08:29.068 "nbd_get_disks", 00:08:29.068 "nbd_stop_disk", 00:08:29.068 "nbd_start_disk", 00:08:29.068 "ublk_recover_disk", 00:08:29.068 "ublk_get_disks", 00:08:29.068 "ublk_stop_disk", 00:08:29.068 "ublk_start_disk", 00:08:29.068 "ublk_destroy_target", 00:08:29.068 "ublk_create_target", 00:08:29.068 "virtio_blk_create_transport", 00:08:29.068 "virtio_blk_get_transports", 00:08:29.068 "vhost_controller_set_coalescing", 00:08:29.068 "vhost_get_controllers", 00:08:29.068 "vhost_delete_controller", 00:08:29.068 "vhost_create_blk_controller", 00:08:29.068 "vhost_scsi_controller_remove_target", 00:08:29.068 "vhost_scsi_controller_add_target", 00:08:29.068 "vhost_start_scsi_controller", 00:08:29.068 "vhost_create_scsi_controller", 00:08:29.068 "thread_set_cpumask", 00:08:29.068 "scheduler_set_options", 00:08:29.068 "framework_get_governor", 00:08:29.068 "framework_get_scheduler", 00:08:29.068 "framework_set_scheduler", 00:08:29.068 "framework_get_reactors", 00:08:29.068 "thread_get_io_channels", 00:08:29.068 "thread_get_pollers", 00:08:29.068 "thread_get_stats", 00:08:29.068 "framework_monitor_context_switch", 00:08:29.068 "spdk_kill_instance", 00:08:29.068 "log_enable_timestamps", 00:08:29.068 "log_get_flags", 00:08:29.068 "log_clear_flag", 00:08:29.068 "log_set_flag", 00:08:29.068 "log_get_level", 00:08:29.068 "log_set_level", 00:08:29.068 "log_get_print_level", 00:08:29.068 "log_set_print_level", 00:08:29.068 "framework_enable_cpumask_locks", 00:08:29.068 "framework_disable_cpumask_locks", 00:08:29.068 "framework_wait_init", 00:08:29.068 "framework_start_init", 00:08:29.068 "scsi_get_devices", 00:08:29.068 "bdev_get_histogram", 00:08:29.068 "bdev_enable_histogram", 00:08:29.068 "bdev_set_qos_limit", 00:08:29.068 "bdev_set_qd_sampling_period", 00:08:29.068 "bdev_get_bdevs", 00:08:29.068 "bdev_reset_iostat", 00:08:29.068 "bdev_get_iostat", 00:08:29.068 "bdev_examine", 00:08:29.068 "bdev_wait_for_examine", 00:08:29.068 "bdev_set_options", 00:08:29.068 "accel_get_stats", 00:08:29.068 "accel_set_options", 00:08:29.068 "accel_set_driver", 00:08:29.068 "accel_crypto_key_destroy", 00:08:29.068 "accel_crypto_keys_get", 00:08:29.068 "accel_crypto_key_create", 00:08:29.068 "accel_assign_opc", 00:08:29.068 "accel_get_module_info", 00:08:29.068 "accel_get_opc_assignments", 00:08:29.068 "vmd_rescan", 00:08:29.068 "vmd_remove_device", 00:08:29.068 "vmd_enable", 00:08:29.068 "sock_get_default_impl", 00:08:29.068 "sock_set_default_impl", 
00:08:29.068 "sock_impl_set_options", 00:08:29.068 "sock_impl_get_options", 00:08:29.068 "iobuf_get_stats", 00:08:29.068 "iobuf_set_options", 00:08:29.068 "keyring_get_keys", 00:08:29.068 "vfu_tgt_set_base_path", 00:08:29.068 "framework_get_pci_devices", 00:08:29.068 "framework_get_config", 00:08:29.068 "framework_get_subsystems", 00:08:29.068 "fsdev_set_opts", 00:08:29.068 "fsdev_get_opts", 00:08:29.068 "trace_get_info", 00:08:29.068 "trace_get_tpoint_group_mask", 00:08:29.068 "trace_disable_tpoint_group", 00:08:29.068 "trace_enable_tpoint_group", 00:08:29.068 "trace_clear_tpoint_mask", 00:08:29.068 "trace_set_tpoint_mask", 00:08:29.068 "notify_get_notifications", 00:08:29.068 "notify_get_types", 00:08:29.068 "spdk_get_version", 00:08:29.068 "rpc_get_methods" 00:08:29.068 ] 00:08:29.068 17:25:28 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:29.068 17:25:28 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:29.068 17:25:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.068 17:25:28 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:29.068 17:25:28 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 923869 00:08:29.068 17:25:28 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 923869 ']' 00:08:29.068 17:25:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 923869 00:08:29.068 17:25:28 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:08:29.068 17:25:28 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.068 17:25:28 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 923869 00:08:29.327 17:25:28 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:29.327 17:25:28 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:29.327 17:25:28 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 923869' 00:08:29.327 killing process with pid 923869 00:08:29.327 17:25:28 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 923869 00:08:29.327 17:25:28 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 923869 00:08:29.586 00:08:29.586 real 0m1.138s 00:08:29.586 user 0m1.923s 00:08:29.586 sys 0m0.432s 00:08:29.586 17:25:28 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.586 17:25:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.586 ************************************ 00:08:29.586 END TEST spdkcli_tcp 00:08:29.586 ************************************ 00:08:29.586 17:25:28 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:29.586 17:25:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.586 17:25:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.586 17:25:28 -- common/autotest_common.sh@10 -- # set +x 00:08:29.586 ************************************ 00:08:29.586 START TEST dpdk_mem_utility 00:08:29.586 ************************************ 00:08:29.586 17:25:28 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:29.586 * Looking for test storage... 
00:08:29.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:08:29.586 17:25:28 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:29.586 17:25:28 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:08:29.586 17:25:28 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:29.845 17:25:28 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.845 17:25:28 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:29.845 17:25:28 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.845 17:25:28 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:29.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.845 --rc genhtml_branch_coverage=1 00:08:29.845 --rc genhtml_function_coverage=1 00:08:29.845 --rc genhtml_legend=1 00:08:29.845 --rc geninfo_all_blocks=1 00:08:29.845 --rc geninfo_unexecuted_blocks=1 00:08:29.845 00:08:29.845 ' 00:08:29.845 17:25:28 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:29.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.845 --rc 
genhtml_branch_coverage=1 00:08:29.845 --rc genhtml_function_coverage=1 00:08:29.845 --rc genhtml_legend=1 00:08:29.845 --rc geninfo_all_blocks=1 00:08:29.845 --rc geninfo_unexecuted_blocks=1 00:08:29.845 00:08:29.845 ' 00:08:29.845 17:25:28 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:29.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.845 --rc genhtml_branch_coverage=1 00:08:29.846 --rc genhtml_function_coverage=1 00:08:29.846 --rc genhtml_legend=1 00:08:29.846 --rc geninfo_all_blocks=1 00:08:29.846 --rc geninfo_unexecuted_blocks=1 00:08:29.846 00:08:29.846 ' 00:08:29.846 17:25:28 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:29.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.846 --rc genhtml_branch_coverage=1 00:08:29.846 --rc genhtml_function_coverage=1 00:08:29.846 --rc genhtml_legend=1 00:08:29.846 --rc geninfo_all_blocks=1 00:08:29.846 --rc geninfo_unexecuted_blocks=1 00:08:29.846 00:08:29.846 ' 00:08:29.846 17:25:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:29.846 17:25:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=924061 00:08:29.846 17:25:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 924061 00:08:29.846 17:25:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:29.846 17:25:28 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 924061 ']' 00:08:29.846 17:25:28 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.846 17:25:28 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.846 17:25:28 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.846 17:25:28 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.846 17:25:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:29.846 [2024-10-14 17:25:28.812726] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
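The memory dump that follows is produced in three steps: the test starts spdk_tgt, asks it over RPC to write its DPDK allocator state to a file (env_dpdk_get_mem_stats, which reports the dump filename), and then post-processes that file with dpdk_mem_info.py. A rough sketch of the same sequence by hand, assuming the target's default /var/tmp/spdk.sock (paths as in this workspace):

    ./spdk/build/bin/spdk_tgt &
    ./spdk/scripts/rpc.py env_dpdk_get_mem_stats    # dump written to /tmp/spdk_mem_dump.txt
    ./spdk/scripts/dpdk_mem_info.py                 # heap/mempool/memzone summary
    ./spdk/scripts/dpdk_mem_info.py -m 0            # detailed view of heap id 0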
00:08:29.846 [2024-10-14 17:25:28.812773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid924061 ] 00:08:29.846 [2024-10-14 17:25:28.882003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.846 [2024-10-14 17:25:28.924154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.106 17:25:29 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:30.106 17:25:29 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:08:30.106 17:25:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:30.106 17:25:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:30.106 17:25:29 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.106 17:25:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:30.106 { 00:08:30.106 "filename": "/tmp/spdk_mem_dump.txt" 00:08:30.106 } 00:08:30.106 17:25:29 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.106 17:25:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:30.106 DPDK memory size 810.000000 MiB in 1 heap(s) 00:08:30.106 1 heaps totaling size 810.000000 MiB 00:08:30.106 size: 810.000000 MiB heap id: 0 00:08:30.106 end heaps---------- 00:08:30.106 9 mempools totaling size 595.772034 MiB 00:08:30.106 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:30.106 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:30.106 size: 92.545471 MiB name: bdev_io_924061 00:08:30.106 size: 50.003479 MiB name: msgpool_924061 00:08:30.106 size: 36.509338 MiB name: fsdev_io_924061 00:08:30.106 size: 21.763794 MiB name: PDU_Pool 00:08:30.106 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:30.106 size: 4.133484 MiB name: evtpool_924061 00:08:30.106 size: 0.026123 MiB name: Session_Pool 00:08:30.106 end mempools------- 00:08:30.106 6 memzones totaling size 4.142822 MiB 00:08:30.106 size: 1.000366 MiB name: RG_ring_0_924061 00:08:30.106 size: 1.000366 MiB name: RG_ring_1_924061 00:08:30.106 size: 1.000366 MiB name: RG_ring_4_924061 00:08:30.106 size: 1.000366 MiB name: RG_ring_5_924061 00:08:30.106 size: 0.125366 MiB name: RG_ring_2_924061 00:08:30.106 size: 0.015991 MiB name: RG_ring_3_924061 00:08:30.106 end memzones------- 00:08:30.106 17:25:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:08:30.106 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:08:30.106 list of free elements. 
size: 10.862488 MiB 00:08:30.106 element at address: 0x200018a00000 with size: 0.999878 MiB 00:08:30.106 element at address: 0x200018c00000 with size: 0.999878 MiB 00:08:30.106 element at address: 0x200000400000 with size: 0.998535 MiB 00:08:30.106 element at address: 0x200031800000 with size: 0.994446 MiB 00:08:30.106 element at address: 0x200006400000 with size: 0.959839 MiB 00:08:30.106 element at address: 0x200012c00000 with size: 0.954285 MiB 00:08:30.106 element at address: 0x200018e00000 with size: 0.936584 MiB 00:08:30.106 element at address: 0x200000200000 with size: 0.717346 MiB 00:08:30.106 element at address: 0x20001a600000 with size: 0.582886 MiB 00:08:30.106 element at address: 0x200000c00000 with size: 0.495422 MiB 00:08:30.106 element at address: 0x20000a600000 with size: 0.490723 MiB 00:08:30.106 element at address: 0x200019000000 with size: 0.485657 MiB 00:08:30.106 element at address: 0x200003e00000 with size: 0.481934 MiB 00:08:30.106 element at address: 0x200027a00000 with size: 0.410034 MiB 00:08:30.106 element at address: 0x200000800000 with size: 0.355042 MiB 00:08:30.106 list of standard malloc elements. size: 199.218628 MiB 00:08:30.106 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:08:30.106 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:08:30.106 element at address: 0x200018afff80 with size: 1.000122 MiB 00:08:30.106 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:08:30.106 element at address: 0x200018efff80 with size: 1.000122 MiB 00:08:30.106 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:30.106 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:08:30.106 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:30.106 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:08:30.106 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:30.106 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:30.106 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:08:30.106 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:08:30.106 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:08:30.106 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:08:30.106 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:08:30.106 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:08:30.106 element at address: 0x20000085b040 with size: 0.000183 MiB 00:08:30.106 element at address: 0x20000085f300 with size: 0.000183 MiB 00:08:30.106 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:08:30.106 element at address: 0x20000087f680 with size: 0.000183 MiB 00:08:30.106 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:08:30.106 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:08:30.106 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:08:30.106 element at address: 0x200000cff000 with size: 0.000183 MiB 00:08:30.106 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:08:30.106 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:08:30.106 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:08:30.106 element at address: 0x200003efb980 with size: 0.000183 MiB 00:08:30.106 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:08:30.106 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:08:30.106 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:08:30.106 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:08:30.106 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:08:30.106 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:08:30.106 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:08:30.106 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:08:30.106 element at address: 0x20001a695380 with size: 0.000183 MiB 00:08:30.106 element at address: 0x20001a695440 with size: 0.000183 MiB 00:08:30.106 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:08:30.106 element at address: 0x200027a69040 with size: 0.000183 MiB 00:08:30.106 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:08:30.106 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:08:30.106 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:08:30.106 list of memzone associated elements. size: 599.918884 MiB 00:08:30.106 element at address: 0x20001a695500 with size: 211.416748 MiB 00:08:30.106 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:30.106 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:08:30.106 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:30.106 element at address: 0x200012df4780 with size: 92.045044 MiB 00:08:30.106 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_924061_0 00:08:30.106 element at address: 0x200000dff380 with size: 48.003052 MiB 00:08:30.106 associated memzone info: size: 48.002930 MiB name: MP_msgpool_924061_0 00:08:30.106 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:08:30.106 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_924061_0 00:08:30.106 element at address: 0x2000191be940 with size: 20.255554 MiB 00:08:30.106 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:30.106 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:08:30.106 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:30.106 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:08:30.106 associated memzone info: size: 3.000122 MiB name: MP_evtpool_924061_0 00:08:30.106 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:08:30.106 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_924061 00:08:30.106 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:30.106 associated memzone info: size: 1.007996 MiB name: MP_evtpool_924061 00:08:30.106 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:08:30.106 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:30.106 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:08:30.106 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:30.106 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:08:30.106 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:30.106 element at address: 0x200003efba40 with size: 1.008118 MiB 00:08:30.106 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:30.106 element at address: 0x200000cff180 with size: 1.000488 MiB 00:08:30.106 associated memzone info: size: 1.000366 MiB name: RG_ring_0_924061 00:08:30.106 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:08:30.106 associated memzone info: size: 1.000366 MiB name: RG_ring_1_924061 00:08:30.106 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:08:30.106 associated memzone info: size: 1.000366 MiB name: RG_ring_4_924061 00:08:30.107 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:08:30.107 associated memzone info: size: 1.000366 MiB name: RG_ring_5_924061 00:08:30.107 element at address: 0x20000087f740 with size: 0.500488 MiB 00:08:30.107 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_924061 00:08:30.107 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:08:30.107 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_924061 00:08:30.107 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:08:30.107 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:30.107 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:08:30.107 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:30.107 element at address: 0x20001907c540 with size: 0.250488 MiB 00:08:30.107 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:30.107 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:08:30.107 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_924061 00:08:30.107 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:08:30.107 associated memzone info: size: 0.125366 MiB name: RG_ring_2_924061 00:08:30.107 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:08:30.107 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:30.107 element at address: 0x200027a69100 with size: 0.023743 MiB 00:08:30.107 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:30.107 element at address: 0x20000085b100 with size: 0.016113 MiB 00:08:30.107 associated memzone info: size: 0.015991 MiB name: RG_ring_3_924061 00:08:30.107 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:08:30.107 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:30.107 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:08:30.107 associated memzone info: size: 0.000183 MiB name: MP_msgpool_924061 00:08:30.107 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:08:30.107 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_924061 00:08:30.107 element at address: 0x20000085af00 with size: 0.000305 MiB 00:08:30.107 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_924061 00:08:30.107 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:08:30.107 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:30.107 17:25:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:30.107 17:25:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 924061 00:08:30.107 17:25:29 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 924061 ']' 00:08:30.107 17:25:29 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 924061 00:08:30.107 17:25:29 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:08:30.107 17:25:29 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.107 17:25:29 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 924061 00:08:30.366 17:25:29 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:30.366 17:25:29 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:30.366 17:25:29 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 924061' 00:08:30.366 killing process with pid 924061 00:08:30.366 17:25:29 dpdk_mem_utility -- 
common/autotest_common.sh@969 -- # kill 924061 00:08:30.366 17:25:29 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 924061 00:08:30.624 00:08:30.624 real 0m1.000s 00:08:30.624 user 0m0.926s 00:08:30.624 sys 0m0.409s 00:08:30.624 17:25:29 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.624 17:25:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:30.624 ************************************ 00:08:30.624 END TEST dpdk_mem_utility 00:08:30.624 ************************************ 00:08:30.624 17:25:29 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:30.624 17:25:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:30.624 17:25:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.624 17:25:29 -- common/autotest_common.sh@10 -- # set +x 00:08:30.624 ************************************ 00:08:30.624 START TEST event 00:08:30.624 ************************************ 00:08:30.624 17:25:29 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:30.624 * Looking for test storage... 00:08:30.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:30.624 17:25:29 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:30.624 17:25:29 event -- common/autotest_common.sh@1691 -- # lcov --version 00:08:30.624 17:25:29 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:30.883 17:25:29 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:30.883 17:25:29 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.883 17:25:29 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.883 17:25:29 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.883 17:25:29 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.883 17:25:29 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.883 17:25:29 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.883 17:25:29 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.883 17:25:29 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.883 17:25:29 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.883 17:25:29 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.883 17:25:29 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.883 17:25:29 event -- scripts/common.sh@344 -- # case "$op" in 00:08:30.883 17:25:29 event -- scripts/common.sh@345 -- # : 1 00:08:30.883 17:25:29 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.883 17:25:29 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.883 17:25:29 event -- scripts/common.sh@365 -- # decimal 1 00:08:30.883 17:25:29 event -- scripts/common.sh@353 -- # local d=1 00:08:30.883 17:25:29 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.883 17:25:29 event -- scripts/common.sh@355 -- # echo 1 00:08:30.883 17:25:29 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.883 17:25:29 event -- scripts/common.sh@366 -- # decimal 2 00:08:30.883 17:25:29 event -- scripts/common.sh@353 -- # local d=2 00:08:30.883 17:25:29 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.883 17:25:29 event -- scripts/common.sh@355 -- # echo 2 00:08:30.883 17:25:29 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.883 17:25:29 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.883 17:25:29 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.883 17:25:29 event -- scripts/common.sh@368 -- # return 0 00:08:30.883 17:25:29 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.883 17:25:29 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:30.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.883 --rc genhtml_branch_coverage=1 00:08:30.883 --rc genhtml_function_coverage=1 00:08:30.883 --rc genhtml_legend=1 00:08:30.883 --rc geninfo_all_blocks=1 00:08:30.883 --rc geninfo_unexecuted_blocks=1 00:08:30.883 00:08:30.883 ' 00:08:30.883 17:25:29 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:30.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.883 --rc genhtml_branch_coverage=1 00:08:30.883 --rc genhtml_function_coverage=1 00:08:30.883 --rc genhtml_legend=1 00:08:30.883 --rc geninfo_all_blocks=1 00:08:30.883 --rc geninfo_unexecuted_blocks=1 00:08:30.883 00:08:30.883 ' 00:08:30.883 17:25:29 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:30.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.883 --rc genhtml_branch_coverage=1 00:08:30.883 --rc genhtml_function_coverage=1 00:08:30.883 --rc genhtml_legend=1 00:08:30.883 --rc geninfo_all_blocks=1 00:08:30.883 --rc geninfo_unexecuted_blocks=1 00:08:30.883 00:08:30.883 ' 00:08:30.883 17:25:29 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:30.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.883 --rc genhtml_branch_coverage=1 00:08:30.884 --rc genhtml_function_coverage=1 00:08:30.884 --rc genhtml_legend=1 00:08:30.884 --rc geninfo_all_blocks=1 00:08:30.884 --rc geninfo_unexecuted_blocks=1 00:08:30.884 00:08:30.884 ' 00:08:30.884 17:25:29 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:30.884 17:25:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:30.884 17:25:29 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:30.884 17:25:29 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:30.884 17:25:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.884 17:25:29 event -- common/autotest_common.sh@10 -- # set +x 00:08:30.884 ************************************ 00:08:30.884 START TEST event_perf 00:08:30.884 ************************************ 00:08:30.884 17:25:29 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:08:30.884 Running I/O for 1 seconds...[2024-10-14 17:25:29.866977] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:08:30.884 [2024-10-14 17:25:29.867045] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid924271 ] 00:08:30.884 [2024-10-14 17:25:29.936780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.884 [2024-10-14 17:25:29.980095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.884 [2024-10-14 17:25:29.980204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.884 [2024-10-14 17:25:29.980313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.884 [2024-10-14 17:25:29.980313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.263 Running I/O for 1 seconds... 00:08:32.263 lcore 0: 203931 00:08:32.263 lcore 1: 203931 00:08:32.263 lcore 2: 203930 00:08:32.263 lcore 3: 203931 00:08:32.263 done. 00:08:32.263 00:08:32.263 real 0m1.175s 00:08:32.263 user 0m4.084s 00:08:32.263 sys 0m0.087s 00:08:32.263 17:25:31 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.263 17:25:31 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:32.263 ************************************ 00:08:32.263 END TEST event_perf 00:08:32.263 ************************************ 00:08:32.263 17:25:31 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:32.263 17:25:31 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:32.263 17:25:31 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.263 17:25:31 event -- common/autotest_common.sh@10 -- # set +x 00:08:32.263 ************************************ 00:08:32.263 START TEST event_reactor 00:08:32.263 ************************************ 00:08:32.263 17:25:31 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:32.263 [2024-10-14 17:25:31.116346] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
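The four lcore counters above are the per-core event counts from the one-second event_perf run; the reactor test starting here replaces that workload with timed pollers, whose oneshot and tick markers appear below. Both binaries can also be run standalone with the flags the harness used (paths as in this workspace):

    ./spdk/test/event/event_perf/event_perf -m 0xF -t 1    # prints events handled per lcore
    ./spdk/test/event/reactor/reactor -t 1                 # prints oneshot plus tick 100/250/500 markers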
00:08:32.263 [2024-10-14 17:25:31.116413] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid924521 ] 00:08:32.263 [2024-10-14 17:25:31.190938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.263 [2024-10-14 17:25:31.232625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.202 test_start 00:08:33.202 oneshot 00:08:33.202 tick 100 00:08:33.202 tick 100 00:08:33.202 tick 250 00:08:33.202 tick 100 00:08:33.202 tick 100 00:08:33.202 tick 100 00:08:33.202 tick 250 00:08:33.202 tick 500 00:08:33.202 tick 100 00:08:33.202 tick 100 00:08:33.202 tick 250 00:08:33.202 tick 100 00:08:33.202 tick 100 00:08:33.202 test_end 00:08:33.202 00:08:33.202 real 0m1.175s 00:08:33.202 user 0m1.095s 00:08:33.202 sys 0m0.075s 00:08:33.202 17:25:32 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.202 17:25:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:33.202 ************************************ 00:08:33.202 END TEST event_reactor 00:08:33.202 ************************************ 00:08:33.202 17:25:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:33.202 17:25:32 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:33.202 17:25:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.202 17:25:32 event -- common/autotest_common.sh@10 -- # set +x 00:08:33.202 ************************************ 00:08:33.202 START TEST event_reactor_perf 00:08:33.202 ************************************ 00:08:33.202 17:25:32 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:33.461 [2024-10-14 17:25:32.360937] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:08:33.461 [2024-10-14 17:25:32.361004] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid924777 ] 00:08:33.461 [2024-10-14 17:25:32.433238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.461 [2024-10-14 17:25:32.473544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.398 test_start 00:08:34.398 test_end 00:08:34.398 Performance: 509193 events per second 00:08:34.398 00:08:34.398 real 0m1.175s 00:08:34.398 user 0m1.097s 00:08:34.398 sys 0m0.074s 00:08:34.398 17:25:33 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.398 17:25:33 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:34.399 ************************************ 00:08:34.399 END TEST event_reactor_perf 00:08:34.399 ************************************ 00:08:34.658 17:25:33 event -- event/event.sh@49 -- # uname -s 00:08:34.658 17:25:33 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:34.658 17:25:33 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:34.658 17:25:33 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.658 17:25:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.658 17:25:33 event -- common/autotest_common.sh@10 -- # set +x 00:08:34.658 ************************************ 00:08:34.658 START TEST event_scheduler 00:08:34.658 ************************************ 00:08:34.658 17:25:33 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:34.658 * Looking for test storage... 
00:08:34.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:08:34.658 17:25:33 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:34.658 17:25:33 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:08:34.658 17:25:33 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:34.658 17:25:33 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.658 17:25:33 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:34.658 17:25:33 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.658 17:25:33 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:34.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.658 --rc genhtml_branch_coverage=1 00:08:34.658 --rc genhtml_function_coverage=1 00:08:34.658 --rc genhtml_legend=1 00:08:34.658 --rc geninfo_all_blocks=1 00:08:34.658 --rc geninfo_unexecuted_blocks=1 00:08:34.658 00:08:34.658 ' 00:08:34.658 17:25:33 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:34.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.658 --rc genhtml_branch_coverage=1 00:08:34.658 --rc genhtml_function_coverage=1 00:08:34.658 --rc genhtml_legend=1 00:08:34.658 --rc geninfo_all_blocks=1 00:08:34.658 --rc geninfo_unexecuted_blocks=1 00:08:34.658 00:08:34.658 ' 00:08:34.658 17:25:33 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:34.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.658 --rc genhtml_branch_coverage=1 00:08:34.658 --rc genhtml_function_coverage=1 00:08:34.658 --rc genhtml_legend=1 00:08:34.658 --rc geninfo_all_blocks=1 00:08:34.658 --rc geninfo_unexecuted_blocks=1 00:08:34.658 00:08:34.658 ' 00:08:34.658 17:25:33 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:34.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.658 --rc genhtml_branch_coverage=1 00:08:34.658 --rc genhtml_function_coverage=1 00:08:34.658 --rc genhtml_legend=1 00:08:34.658 --rc geninfo_all_blocks=1 00:08:34.658 --rc geninfo_unexecuted_blocks=1 00:08:34.658 00:08:34.658 ' 00:08:34.658 17:25:33 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:34.658 17:25:33 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=925058 00:08:34.658 17:25:33 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:34.658 17:25:33 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:34.658 17:25:33 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 925058 
00:08:34.658 17:25:33 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 925058 ']' 00:08:34.659 17:25:33 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.659 17:25:33 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:34.659 17:25:33 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.659 17:25:33 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.659 17:25:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:34.949 [2024-10-14 17:25:33.808857] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:08:34.949 [2024-10-14 17:25:33.808903] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid925058 ] 00:08:34.949 [2024-10-14 17:25:33.877863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.949 [2024-10-14 17:25:33.920162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.949 [2024-10-14 17:25:33.920272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.949 [2024-10-14 17:25:33.920356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.949 [2024-10-14 17:25:33.920356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.949 17:25:33 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:34.949 17:25:33 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:08:34.949 17:25:33 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:34.949 17:25:33 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.949 17:25:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:34.949 [2024-10-14 17:25:33.968964] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:08:34.949 [2024-10-14 17:25:33.968982] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:34.949 [2024-10-14 17:25:33.968995] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:34.949 [2024-10-14 17:25:33.969000] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:34.949 [2024-10-14 17:25:33.969005] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:34.949 17:25:33 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.949 17:25:33 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:34.949 17:25:33 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.949 17:25:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:34.949 [2024-10-14 17:25:34.045817] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
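The ERROR and NOTICE lines above show the dpdk governor failing to initialize for this core mask (partial SMT sibling sets) while the dynamic scheduler itself still comes up with its default load, core, and busy limits; the switch happens over RPC because the app was started with --wait-for-rpc. A sketch of the same switch issued by hand, socket assumed default:

    ./spdk/scripts/rpc.py framework_set_scheduler dynamic
    ./spdk/scripts/rpc.py framework_get_scheduler    # confirm the active scheduler
    ./spdk/scripts/rpc.py framework_start_init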
00:08:34.949 17:25:34 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.949 17:25:34 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:34.949 17:25:34 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.949 17:25:34 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.949 17:25:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:35.251 ************************************ 00:08:35.251 START TEST scheduler_create_thread 00:08:35.251 ************************************ 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:35.251 2 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:35.251 3 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:35.251 4 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:35.251 5 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:35.251 6 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:35.251 7 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:35.251 8 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:35.251 9 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:35.251 10 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.251 17:25:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:35.252 17:25:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:35.252 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.252 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:35.591 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.591 17:25:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:35.591 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.591 17:25:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:36.995 17:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.995 17:25:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:36.995 17:25:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:36.995 17:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.995 17:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:38.373 17:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.373 00:08:38.373 real 0m3.101s 00:08:38.373 user 0m0.026s 00:08:38.373 sys 0m0.004s 00:08:38.373 17:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.373 17:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:38.373 ************************************ 00:08:38.373 END TEST scheduler_create_thread 00:08:38.373 ************************************ 00:08:38.373 17:25:37 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:38.373 17:25:37 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 925058 00:08:38.373 17:25:37 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 925058 ']' 00:08:38.373 17:25:37 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 925058 00:08:38.373 17:25:37 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:08:38.373 17:25:37 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.373 17:25:37 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 925058 00:08:38.373 17:25:37 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:38.373 17:25:37 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:38.373 17:25:37 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 925058' 00:08:38.373 killing process with pid 925058 00:08:38.373 17:25:37 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 925058 00:08:38.373 17:25:37 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 925058 00:08:38.633 [2024-10-14 17:25:37.561180] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
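The create/activate/delete cycle above is driven through rpc.py plugin methods registered by the scheduler test app; thread ids 11 and 12 in the trace are the values returned by the last two scheduler_thread_create calls. Issued directly, that tail of the sequence would look like this (sketch; assumes the scheduler_plugin module is on PYTHONPATH as the harness arranges):

    ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0    # returned 11
    ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100      # returned 12
    ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12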
00:08:38.633 00:08:38.633 real 0m4.155s 00:08:38.633 user 0m6.666s 00:08:38.633 sys 0m0.357s 00:08:38.633 17:25:37 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.633 17:25:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:38.633 ************************************ 00:08:38.633 END TEST event_scheduler 00:08:38.633 ************************************ 00:08:38.892 17:25:37 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:38.892 17:25:37 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:38.892 17:25:37 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.892 17:25:37 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.892 17:25:37 event -- common/autotest_common.sh@10 -- # set +x 00:08:38.892 ************************************ 00:08:38.892 START TEST app_repeat 00:08:38.892 ************************************ 00:08:38.892 17:25:37 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:08:38.892 17:25:37 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:38.892 17:25:37 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:38.892 17:25:37 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:38.892 17:25:37 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:38.892 17:25:37 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:38.892 17:25:37 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:38.892 17:25:37 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:38.892 17:25:37 event.app_repeat -- event/event.sh@19 -- # repeat_pid=925809 00:08:38.892 17:25:37 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:38.892 17:25:37 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:38.892 17:25:37 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 925809' 00:08:38.892 Process app_repeat pid: 925809 00:08:38.892 17:25:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:38.892 17:25:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:38.892 spdk_app_start Round 0 00:08:38.892 17:25:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 925809 /var/tmp/spdk-nbd.sock 00:08:38.892 17:25:37 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 925809 ']' 00:08:38.893 17:25:37 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:38.893 17:25:37 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.893 17:25:37 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:38.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:38.893 17:25:37 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.893 17:25:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:38.893 [2024-10-14 17:25:37.845299] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
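
The app_repeat launch above follows the usual harness pattern: start the SPDK app in the background, remember its pid for the kill trap, then block in waitforlisten until the RPC socket answers. A hedged re-creation of that pattern (the real waitforlisten lives in test/common/autotest_common.sh and retries up to max_retries=100; the polling loop below is illustrative, not a verbatim copy):

    # Start the app under test and keep its pid for the cleanup trap.
    test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'kill $repeat_pid; exit 1' SIGINT SIGTERM EXIT

    # Poll until the UNIX-domain RPC socket accepts a trivial request.
    for ((retry = 0; retry < 100; retry++)); do
        if scripts/rpc.py -s /var/tmp/spdk-nbd.sock -t 1 rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done
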
00:08:38.893 [2024-10-14 17:25:37.845351] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid925809 ] 00:08:38.893 [2024-10-14 17:25:37.915826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:38.893 [2024-10-14 17:25:37.956153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.893 [2024-10-14 17:25:37.956154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.151 17:25:38 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.152 17:25:38 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:39.152 17:25:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:39.152 Malloc0 00:08:39.152 17:25:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:39.411 Malloc1 00:08:39.411 17:25:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:39.411 17:25:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.411 17:25:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:39.411 17:25:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:39.411 17:25:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:39.411 17:25:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:39.411 17:25:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:39.411 17:25:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.411 17:25:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:39.411 17:25:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:39.411 17:25:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:39.411 17:25:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:39.411 17:25:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:39.411 17:25:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:39.411 17:25:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:39.411 17:25:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:39.670 /dev/nbd0 00:08:39.670 17:25:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:39.670 17:25:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:39.670 17:25:38 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:39.670 17:25:38 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:39.670 17:25:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:39.670 17:25:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:39.670 17:25:38 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:08:39.670 17:25:38 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:39.670 17:25:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:39.670 17:25:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:39.670 17:25:38 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:39.670 1+0 records in 00:08:39.670 1+0 records out 00:08:39.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190572 s, 21.5 MB/s 00:08:39.670 17:25:38 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:39.670 17:25:38 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:39.670 17:25:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:39.670 17:25:38 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:39.670 17:25:38 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:39.670 17:25:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:39.670 17:25:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:39.670 17:25:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:39.929 /dev/nbd1 00:08:39.929 17:25:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:39.929 17:25:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:39.929 17:25:38 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:39.929 17:25:38 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:39.929 17:25:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:39.929 17:25:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:39.929 17:25:38 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:39.929 17:25:38 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:39.929 17:25:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:39.929 17:25:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:39.929 17:25:38 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:39.929 1+0 records in 00:08:39.929 1+0 records out 00:08:39.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213664 s, 19.2 MB/s 00:08:39.929 17:25:38 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:39.929 17:25:38 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:39.929 17:25:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:39.929 17:25:38 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:39.929 17:25:38 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:39.929 17:25:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:39.929 17:25:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:39.929 
17:25:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:39.929 17:25:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.929 17:25:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:40.188 17:25:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:40.188 { 00:08:40.188 "nbd_device": "/dev/nbd0", 00:08:40.188 "bdev_name": "Malloc0" 00:08:40.188 }, 00:08:40.188 { 00:08:40.188 "nbd_device": "/dev/nbd1", 00:08:40.188 "bdev_name": "Malloc1" 00:08:40.188 } 00:08:40.188 ]' 00:08:40.188 17:25:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:40.188 17:25:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:40.188 { 00:08:40.188 "nbd_device": "/dev/nbd0", 00:08:40.188 "bdev_name": "Malloc0" 00:08:40.188 }, 00:08:40.188 { 00:08:40.188 "nbd_device": "/dev/nbd1", 00:08:40.188 "bdev_name": "Malloc1" 00:08:40.188 } 00:08:40.188 ]' 00:08:40.188 17:25:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:40.188 /dev/nbd1' 00:08:40.188 17:25:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:40.188 /dev/nbd1' 00:08:40.188 17:25:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:40.188 17:25:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:40.188 17:25:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:40.188 17:25:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:40.188 17:25:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:40.188 17:25:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:40.188 17:25:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:40.188 17:25:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:40.188 17:25:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:40.188 17:25:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:40.188 17:25:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:40.189 256+0 records in 00:08:40.189 256+0 records out 00:08:40.189 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108299 s, 96.8 MB/s 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:40.189 256+0 records in 00:08:40.189 256+0 records out 00:08:40.189 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140448 s, 74.7 MB/s 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:40.189 256+0 records in 00:08:40.189 256+0 records out 00:08:40.189 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152341 s, 68.8 MB/s 00:08:40.189 17:25:39 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:40.189 17:25:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:40.448 17:25:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:40.448 17:25:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:40.448 17:25:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:40.448 17:25:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.448 17:25:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:40.448 17:25:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:40.448 17:25:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:40.448 17:25:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:40.448 17:25:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:40.448 17:25:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:40.707 17:25:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:40.707 17:25:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:40.707 17:25:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:40.707 17:25:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.707 17:25:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:40.707 17:25:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:40.707 17:25:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:40.707 17:25:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:40.707 17:25:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:40.707 17:25:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.707 17:25:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:40.966 17:25:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:40.966 17:25:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:40.966 17:25:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:40.966 17:25:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:40.966 17:25:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:40.966 17:25:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:40.966 17:25:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:40.966 17:25:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:40.966 17:25:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:40.966 17:25:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:40.966 17:25:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:40.966 17:25:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:40.966 17:25:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:41.226 17:25:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:41.226 [2024-10-14 17:25:40.289630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:41.226 [2024-10-14 17:25:40.326873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.226 [2024-10-14 17:25:40.326874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.485 [2024-10-14 17:25:40.367938] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:41.485 [2024-10-14 17:25:40.367976] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:44.019 17:25:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:44.019 17:25:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' spdk_app_start Round 1 00:08:44.019 17:25:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 925809 /var/tmp/spdk-nbd.sock 00:08:44.019 17:25:43 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 925809 ']' 00:08:44.019 17:25:43 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:44.019 17:25:43 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:44.019 17:25:43 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
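
Each of the four rounds repeats the nbd data-verify cycle traced above: the two Malloc bdevs (64 MB, 4096-byte blocks) are exported as /dev/nbd0 and /dev/nbd1, one shared 1 MiB random pattern is written to both, read back with cmp, and the devices are detached. Condensed into a sketch of nbd_rpc_data_verify's flow (paths and sizes mirror the trace; this is an illustration, not the nbd_common.sh source):

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # Export each Malloc bdev through the kernel nbd driver.
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # Write pass: one random 1 MiB file, copied to each device with O_DIRECT.
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    done

    # Verify pass: cmp exits non-zero on the first differing byte.
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M nbdrandtest "$nbd"
    done
    rm nbdrandtest

    # Detach both devices again before the round's teardown check.
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
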
00:08:44.019 17:25:43 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:44.019 17:25:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:44.277 17:25:43 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.277 17:25:43 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:44.277 17:25:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:44.536 Malloc0 00:08:44.536 17:25:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:44.795 Malloc1 00:08:44.795 17:25:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:44.795 17:25:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.795 17:25:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:44.795 17:25:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:44.795 17:25:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:44.795 17:25:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:44.795 17:25:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:44.795 17:25:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.795 17:25:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:44.795 17:25:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:44.795 17:25:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:44.795 17:25:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:44.795 17:25:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:44.795 17:25:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:44.795 17:25:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:44.795 17:25:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:44.795 /dev/nbd0 00:08:45.054 17:25:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:45.054 17:25:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:45.054 17:25:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:45.054 17:25:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:45.054 17:25:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:45.054 17:25:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:45.054 17:25:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:45.054 17:25:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:45.054 17:25:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:45.054 17:25:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:45.054 17:25:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:45.054 1+0 records in 00:08:45.054 1+0 records out 00:08:45.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186188 s, 22.0 MB/s 00:08:45.054 17:25:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:45.054 17:25:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:45.054 17:25:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:45.054 17:25:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:45.054 17:25:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:45.054 17:25:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:45.054 17:25:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:45.054 17:25:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:45.054 /dev/nbd1 00:08:45.054 17:25:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:45.313 17:25:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:45.313 17:25:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:45.313 17:25:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:45.313 17:25:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:45.313 17:25:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:45.313 17:25:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:45.313 17:25:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:45.313 17:25:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:45.313 17:25:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:45.313 17:25:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:45.313 1+0 records in 00:08:45.313 1+0 records out 00:08:45.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239957 s, 17.1 MB/s 00:08:45.313 17:25:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:45.313 17:25:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:45.313 17:25:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:45.313 17:25:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:45.313 17:25:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:45.313 17:25:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:45.313 17:25:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:45.313 17:25:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:45.313 17:25:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.313 17:25:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:45.313 17:25:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:45.313 { 00:08:45.313 "nbd_device": "/dev/nbd0", 00:08:45.313 "bdev_name": "Malloc0" 00:08:45.313 }, 00:08:45.313 { 00:08:45.313 "nbd_device": "/dev/nbd1", 00:08:45.313 "bdev_name": "Malloc1" 00:08:45.313 } 00:08:45.313 ]' 00:08:45.313 17:25:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:45.313 { 00:08:45.313 "nbd_device": "/dev/nbd0", 00:08:45.313 "bdev_name": "Malloc0" 00:08:45.313 }, 00:08:45.313 { 00:08:45.313 "nbd_device": "/dev/nbd1", 00:08:45.313 "bdev_name": "Malloc1" 00:08:45.313 } 00:08:45.313 ]' 00:08:45.313 17:25:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:45.573 /dev/nbd1' 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:45.573 /dev/nbd1' 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:45.573 256+0 records in 00:08:45.573 256+0 records out 00:08:45.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00363314 s, 289 MB/s 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:45.573 256+0 records in 00:08:45.573 256+0 records out 00:08:45.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136119 s, 77.0 MB/s 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:45.573 256+0 records in 00:08:45.573 256+0 records out 00:08:45.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146559 s, 71.5 MB/s 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.573 17:25:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.832 17:25:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:46.091 17:25:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:46.091 17:25:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:46.091 17:25:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:46.091 17:25:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:46.091 17:25:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:46.091 17:25:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:46.091 17:25:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:46.091 17:25:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:46.091 17:25:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:46.091 17:25:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:46.091 17:25:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:46.091 17:25:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:46.091 17:25:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:46.350 17:25:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:46.609 [2024-10-14 17:25:45.571395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:46.609 [2024-10-14 17:25:45.608061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.609 [2024-10-14 17:25:45.608062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.609 [2024-10-14 17:25:45.648566] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:46.609 [2024-10-14 17:25:45.648609] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:49.902 17:25:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:49.902 17:25:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:49.902 spdk_app_start Round 2 00:08:49.902 17:25:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 925809 /var/tmp/spdk-nbd.sock 00:08:49.902 17:25:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 925809 ']' 00:08:49.902 17:25:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:49.902 17:25:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.902 17:25:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:49.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
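
The nbd_get_count check that closes each round asks the app for its current disk map and counts /dev/nbd entries; after both nbd_stop_disk calls the JSON comes back as [], so the count must be 0. Roughly (jq filter as in the trace; the || true mirrors the true step above, since grep -c exits non-zero when it prints 0 matches):

    disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] || exit 1   # any leftover /dev/nbd* means a stop failed
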
00:08:49.902 17:25:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.902 17:25:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:49.902 17:25:48 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.902 17:25:48 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:49.902 17:25:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:49.902 Malloc0 00:08:49.902 17:25:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:49.902 Malloc1 00:08:49.902 17:25:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:49.902 17:25:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.902 17:25:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:49.902 17:25:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:49.902 17:25:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:49.902 17:25:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:49.902 17:25:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:49.902 17:25:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.902 17:25:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:49.902 17:25:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:49.903 17:25:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:49.903 17:25:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:49.903 17:25:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:49.903 17:25:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:49.903 17:25:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:49.903 17:25:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:50.162 /dev/nbd0 00:08:50.162 17:25:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:50.162 17:25:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:50.162 17:25:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:50.162 17:25:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:50.162 17:25:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:50.162 17:25:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:50.162 17:25:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:50.162 17:25:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:50.162 17:25:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:50.162 17:25:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:50.162 17:25:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:50.162 1+0 records in 00:08:50.162 1+0 records out 00:08:50.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244992 s, 16.7 MB/s 00:08:50.162 17:25:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:50.162 17:25:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:50.162 17:25:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:50.162 17:25:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:50.162 17:25:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:50.162 17:25:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:50.162 17:25:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:50.162 17:25:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:50.421 /dev/nbd1 00:08:50.421 17:25:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:50.421 17:25:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:50.421 17:25:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:50.421 17:25:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:50.421 17:25:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:50.421 17:25:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:50.421 17:25:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:50.421 17:25:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:50.421 17:25:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:50.421 17:25:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:50.421 17:25:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:50.421 1+0 records in 00:08:50.421 1+0 records out 00:08:50.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237224 s, 17.3 MB/s 00:08:50.421 17:25:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:50.421 17:25:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:50.421 17:25:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:50.421 17:25:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:50.421 17:25:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:50.421 17:25:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:50.421 17:25:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:50.421 17:25:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:50.421 17:25:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.421 17:25:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:50.680 { 00:08:50.680 "nbd_device": "/dev/nbd0", 00:08:50.680 "bdev_name": "Malloc0" 00:08:50.680 }, 00:08:50.680 { 00:08:50.680 "nbd_device": "/dev/nbd1", 00:08:50.680 "bdev_name": "Malloc1" 00:08:50.680 } 00:08:50.680 ]' 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:50.680 { 00:08:50.680 "nbd_device": "/dev/nbd0", 00:08:50.680 "bdev_name": "Malloc0" 00:08:50.680 }, 00:08:50.680 { 00:08:50.680 "nbd_device": "/dev/nbd1", 00:08:50.680 "bdev_name": "Malloc1" 00:08:50.680 } 00:08:50.680 ]' 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:50.680 /dev/nbd1' 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:50.680 /dev/nbd1' 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:50.680 256+0 records in 00:08:50.680 256+0 records out 00:08:50.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010834 s, 96.8 MB/s 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:50.680 256+0 records in 00:08:50.680 256+0 records out 00:08:50.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145534 s, 72.1 MB/s 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:50.680 17:25:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:50.939 256+0 records in 00:08:50.939 256+0 records out 00:08:50.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148344 s, 70.7 MB/s 00:08:50.939 17:25:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:50.939 17:25:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:50.939 17:25:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:50.939 17:25:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:50.939 17:25:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:50.939 17:25:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:50.939 17:25:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:50.939 17:25:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:50.939 17:25:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:50.939 17:25:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:50.939 17:25:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:50.939 17:25:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:50.939 17:25:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:50.939 17:25:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.939 17:25:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:50.939 17:25:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:50.939 17:25:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:50.939 17:25:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.939 17:25:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:50.939 17:25:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:50.939 17:25:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:50.939 17:25:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:50.939 17:25:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.939 17:25:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.939 17:25:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:50.939 17:25:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:50.939 17:25:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.939 17:25:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.939 17:25:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:51.199 17:25:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:51.199 17:25:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:51.199 17:25:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:51.199 17:25:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.199 17:25:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.199 17:25:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:51.199 17:25:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:51.199 17:25:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.199 17:25:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:51.199 17:25:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.199 17:25:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:51.458 17:25:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:51.458 17:25:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:51.458 17:25:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:51.458 17:25:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:51.458 17:25:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:51.458 17:25:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:51.458 17:25:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:51.458 17:25:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:51.458 17:25:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:51.458 17:25:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:51.458 17:25:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:51.458 17:25:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:51.458 17:25:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:51.717 17:25:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:51.976 [2024-10-14 17:25:50.889269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:51.976 [2024-10-14 17:25:50.926520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.976 [2024-10-14 17:25:50.926521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.976 [2024-10-14 17:25:50.967178] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:51.976 [2024-10-14 17:25:50.967218] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:55.262 17:25:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 925809 /var/tmp/spdk-nbd.sock 00:08:55.262 17:25:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 925809 ']' 00:08:55.262 17:25:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:55.262 17:25:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:55.262 17:25:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:55.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
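
Once the final round ends, the harness hands pid 925809 to killprocess, whose steps are spelled out in the trace that follows: confirm the pid is alive, inspect its process name, refuse to signal a sudo wrapper, then SIGTERM and reap it. A reduced sketch of that helper (the real one in autotest_common.sh covers more corner cases than shown here):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2> /dev/null || return 0   # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1            # never SIGTERM the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                       # reap it so the exit status is collected
    }
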
00:08:55.262 17:25:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:55.262 17:25:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:55.262 17:25:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.262 17:25:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:55.262 17:25:53 event.app_repeat -- event/event.sh@39 -- # killprocess 925809 00:08:55.262 17:25:53 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 925809 ']' 00:08:55.262 17:25:53 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 925809 00:08:55.262 17:25:53 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:08:55.262 17:25:53 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:55.262 17:25:53 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 925809 00:08:55.262 17:25:53 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:55.262 17:25:53 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:55.262 17:25:54 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 925809' 00:08:55.262 killing process with pid 925809 00:08:55.262 17:25:54 event.app_repeat -- common/autotest_common.sh@969 -- # kill 925809 00:08:55.262 17:25:54 event.app_repeat -- common/autotest_common.sh@974 -- # wait 925809 00:08:55.262 spdk_app_start is called in Round 0. 00:08:55.262 Shutdown signal received, stop current app iteration 00:08:55.262 Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 reinitialization... 00:08:55.262 spdk_app_start is called in Round 1. 00:08:55.262 Shutdown signal received, stop current app iteration 00:08:55.262 Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 reinitialization... 00:08:55.262 spdk_app_start is called in Round 2. 00:08:55.262 Shutdown signal received, stop current app iteration 00:08:55.262 Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 reinitialization... 00:08:55.262 spdk_app_start is called in Round 3. 
00:08:55.262 Shutdown signal received, stop current app iteration 00:08:55.262 17:25:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:55.263 17:25:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:55.263 00:08:55.263 real 0m16.325s 00:08:55.263 user 0m35.888s 00:08:55.263 sys 0m2.531s 00:08:55.263 17:25:54 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.263 17:25:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:55.263 ************************************ 00:08:55.263 END TEST app_repeat 00:08:55.263 ************************************ 00:08:55.263 17:25:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:55.263 17:25:54 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:55.263 17:25:54 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.263 17:25:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.263 17:25:54 event -- common/autotest_common.sh@10 -- # set +x 00:08:55.263 ************************************ 00:08:55.263 START TEST cpu_locks 00:08:55.263 ************************************ 00:08:55.263 17:25:54 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:55.263 * Looking for test storage... 00:08:55.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:55.263 17:25:54 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:55.263 17:25:54 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:08:55.263 17:25:54 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:55.263 17:25:54 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.263 17:25:54 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:55.263 17:25:54 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.263 17:25:54 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:55.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.263 --rc genhtml_branch_coverage=1 00:08:55.263 --rc genhtml_function_coverage=1 00:08:55.263 --rc genhtml_legend=1 00:08:55.263 --rc geninfo_all_blocks=1 00:08:55.263 --rc geninfo_unexecuted_blocks=1 00:08:55.263 00:08:55.263 ' 00:08:55.263 17:25:54 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:55.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.263 --rc genhtml_branch_coverage=1 00:08:55.263 --rc genhtml_function_coverage=1 00:08:55.263 --rc genhtml_legend=1 00:08:55.263 --rc geninfo_all_blocks=1 00:08:55.263 --rc geninfo_unexecuted_blocks=1 00:08:55.263 00:08:55.263 ' 00:08:55.263 17:25:54 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:55.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.263 --rc genhtml_branch_coverage=1 00:08:55.263 --rc genhtml_function_coverage=1 00:08:55.263 --rc genhtml_legend=1 00:08:55.263 --rc geninfo_all_blocks=1 00:08:55.263 --rc geninfo_unexecuted_blocks=1 00:08:55.263 00:08:55.263 ' 00:08:55.263 17:25:54 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:55.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.263 --rc genhtml_branch_coverage=1 00:08:55.263 --rc genhtml_function_coverage=1 00:08:55.263 --rc genhtml_legend=1 00:08:55.263 --rc geninfo_all_blocks=1 00:08:55.263 --rc geninfo_unexecuted_blocks=1 00:08:55.263 00:08:55.263 ' 00:08:55.263 17:25:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:55.263 17:25:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:55.263 17:25:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:55.263 17:25:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:55.263 17:25:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.263 17:25:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.263 17:25:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:55.522 ************************************ 
00:08:55.522 START TEST default_locks 00:08:55.522 ************************************ 00:08:55.522 17:25:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:08:55.522 17:25:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=928801 00:08:55.522 17:25:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 928801 00:08:55.522 17:25:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:55.522 17:25:54 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 928801 ']' 00:08:55.522 17:25:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.522 17:25:54 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:55.522 17:25:54 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.522 17:25:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:55.522 17:25:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:55.522 [2024-10-14 17:25:54.472652] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:08:55.522 [2024-10-14 17:25:54.472692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid928801 ] 00:08:55.522 [2024-10-14 17:25:54.539975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.522 [2024-10-14 17:25:54.581864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.780 17:25:54 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.780 17:25:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:08:55.780 17:25:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 928801 00:08:55.780 17:25:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 928801 00:08:55.780 17:25:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:56.038 lslocks: write error 00:08:56.039 17:25:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 928801 00:08:56.039 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 928801 ']' 00:08:56.039 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 928801 00:08:56.039 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:08:56.039 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:56.039 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 928801 00:08:56.298 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:56.298 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:56.298 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 928801' 
00:08:56.298 killing process with pid 928801 00:08:56.298 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 928801 00:08:56.298 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 928801 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 928801 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 928801 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 928801 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 928801 ']' 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
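The lslocks calls above are the harness's core-ownership probe: a target owns its cores for as long as it holds POSIX locks on /var/tmp/spdk_cpu_lock_* files, so checking for a lock reduces to one pipeline. The "lslocks: write error" line on stdout is noise from lslocks itself, not a test failure:

# Probe from the trace: does the given pid hold any SPDK core lock?
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}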
00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:56.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (928801) - No such process 00:08:56.557 ERROR: process (pid: 928801) is no longer running 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:56.557 00:08:56.557 real 0m1.102s 00:08:56.557 user 0m1.053s 00:08:56.557 sys 0m0.507s 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.557 17:25:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:56.557 ************************************ 00:08:56.557 END TEST default_locks 00:08:56.557 ************************************ 00:08:56.557 17:25:55 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:56.557 17:25:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:56.557 17:25:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.557 17:25:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:56.557 ************************************ 00:08:56.557 START TEST default_locks_via_rpc 00:08:56.557 ************************************ 00:08:56.557 17:25:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:08:56.557 17:25:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=929057 00:08:56.557 17:25:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 929057 00:08:56.557 17:25:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:56.557 17:25:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 929057 ']' 00:08:56.557 17:25:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.557 17:25:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.557 17:25:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
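The es bookkeeping above ((( es > 128 )), (( !es == 0 ))) is the NOT wrapper resolving: waitforlisten on the dead pid is expected to fail, and NOT turns that failure into a pass. A simplified sketch of the inversion, omitting the harness's extra argument validation:

# Succeed only when the wrapped command fails; let signal-style exit
# codes (> 128) propagate as genuine failures.
NOT() {
    local es=0
    "$@" || es=$?
    if (( es > 128 )); then
        return "$es"
    fi
    (( es != 0 ))
}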
00:08:56.557 17:25:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:56.557 17:25:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.557 [2024-10-14 17:25:55.644580] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:08:56.557 [2024-10-14 17:25:55.644631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929057 ] 00:08:56.816 [2024-10-14 17:25:55.710439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.816 [2024-10-14 17:25:55.748419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.075 17:25:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.075 17:25:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:57.075 17:25:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:57.075 17:25:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.075 17:25:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.075 17:25:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.075 17:25:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:57.075 17:25:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:57.075 17:25:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:57.075 17:25:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:57.075 17:25:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:57.075 17:25:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.075 17:25:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.075 17:25:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.075 17:25:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 929057 00:08:57.075 17:25:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 929057 00:08:57.075 17:25:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:57.334 17:25:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 929057 00:08:57.334 17:25:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 929057 ']' 00:08:57.334 17:25:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 929057 00:08:57.334 17:25:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:08:57.334 17:25:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:57.334 17:25:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 929057 00:08:57.334 17:25:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:57.334 17:25:56 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:57.334 17:25:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 929057' 00:08:57.334 killing process with pid 929057 00:08:57.334 17:25:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 929057 00:08:57.334 17:25:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 929057 00:08:57.594 00:08:57.594 real 0m1.044s 00:08:57.594 user 0m0.994s 00:08:57.594 sys 0m0.484s 00:08:57.594 17:25:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.594 17:25:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.594 ************************************ 00:08:57.594 END TEST default_locks_via_rpc 00:08:57.594 ************************************ 00:08:57.594 17:25:56 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:57.594 17:25:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:57.594 17:25:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.594 17:25:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:57.594 ************************************ 00:08:57.594 START TEST non_locking_app_on_locked_coremask 00:08:57.594 ************************************ 00:08:57.594 17:25:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:08:57.594 17:25:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=929313 00:08:57.594 17:25:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 929313 /var/tmp/spdk.sock 00:08:57.594 17:25:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:57.594 17:25:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 929313 ']' 00:08:57.594 17:25:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.594 17:25:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:57.594 17:25:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.594 17:25:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:57.594 17:25:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:57.853 [2024-10-14 17:25:56.759357] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:08:57.853 [2024-10-14 17:25:56.759398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929313 ] 00:08:57.853 [2024-10-14 17:25:56.828658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.853 [2024-10-14 17:25:56.870621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.112 17:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:58.112 17:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:58.112 17:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=929322 00:08:58.112 17:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 929322 /var/tmp/spdk2.sock 00:08:58.112 17:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:58.112 17:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 929322 ']' 00:08:58.112 17:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:58.112 17:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:58.112 17:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:58.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:58.112 17:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:58.112 17:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:58.112 [2024-10-14 17:25:57.129250] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:08:58.112 [2024-10-14 17:25:57.129295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929322 ] 00:08:58.112 [2024-10-14 17:25:57.198173] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
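The "CPU core locks deactivated." notice is the second target opting out of core locking, which is what lets it come up on the same core 0 that pid 929313 has already locked. Condensed, with the flags and socket path taken from the trace:

spdk_tgt -m 0x1 &                          # claims /var/tmp/spdk_cpu_lock_000
spdk_tgt -m 0x1 --disable-cpumask-locks \
         -r /var/tmp/spdk2.sock &          # shares core 0, claims nothing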
00:08:58.112 [2024-10-14 17:25:57.198195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.371 [2024-10-14 17:25:57.285629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.947 17:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:58.947 17:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:58.947 17:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 929313 00:08:58.947 17:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:58.947 17:25:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 929313 00:08:59.521 lslocks: write error 00:08:59.521 17:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 929313 00:08:59.521 17:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 929313 ']' 00:08:59.521 17:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 929313 00:08:59.521 17:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:59.521 17:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:59.521 17:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 929313 00:08:59.521 17:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:59.521 17:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:59.521 17:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 929313' 00:08:59.521 killing process with pid 929313 00:08:59.521 17:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 929313 00:08:59.521 17:25:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 929313 00:09:00.089 17:25:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 929322 00:09:00.089 17:25:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 929322 ']' 00:09:00.089 17:25:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 929322 00:09:00.089 17:25:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:00.089 17:25:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:00.089 17:25:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 929322 00:09:00.089 17:25:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:00.089 17:25:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:00.089 17:25:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 929322' 00:09:00.089 killing 
process with pid 929322 00:09:00.089 17:25:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 929322 00:09:00.089 17:25:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 929322 00:09:00.348 00:09:00.348 real 0m2.767s 00:09:00.348 user 0m2.920s 00:09:00.348 sys 0m0.917s 00:09:00.348 17:25:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.348 17:25:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:00.348 ************************************ 00:09:00.348 END TEST non_locking_app_on_locked_coremask 00:09:00.348 ************************************ 00:09:00.607 17:25:59 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:00.607 17:25:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:00.607 17:25:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.607 17:25:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:00.607 ************************************ 00:09:00.607 START TEST locking_app_on_unlocked_coremask 00:09:00.607 ************************************ 00:09:00.607 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:09:00.607 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=929817 00:09:00.607 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 929817 /var/tmp/spdk.sock 00:09:00.607 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:00.607 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 929817 ']' 00:09:00.607 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.607 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.607 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.607 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.607 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:00.607 [2024-10-14 17:25:59.595542] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:09:00.607 [2024-10-14 17:25:59.595583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929817 ] 00:09:00.607 [2024-10-14 17:25:59.662180] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
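killprocess, expanded several times above, guards the kill with two sanity checks before reaping the target. A reduced sketch of that sequence:

# Kill a test target safely: it must still exist and must not be a
# sudo wrapper, then signal it and wait for it to exit.
killprocess() {
    local pid=$1
    kill -0 "$pid"                                    # still running?
    [ "$(ps --no-headers -o comm= "$pid")" != sudo ]  # never signal sudo
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}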
00:09:00.607 [2024-10-14 17:25:59.662205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.607 [2024-10-14 17:25:59.701680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.867 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.867 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:00.867 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=929827 00:09:00.867 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 929827 /var/tmp/spdk2.sock 00:09:00.867 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:00.867 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 929827 ']' 00:09:00.867 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:00.867 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.867 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:00.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:00.867 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.867 17:25:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:00.867 [2024-10-14 17:25:59.975569] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:09:00.867 [2024-10-14 17:25:59.975619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929827 ] 00:09:01.126 [2024-10-14 17:26:00.050203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.126 [2024-10-14 17:26:00.139625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.693 17:26:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.693 17:26:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:01.693 17:26:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 929827 00:09:01.693 17:26:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 929827 00:09:01.693 17:26:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:02.261 lslocks: write error 00:09:02.261 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 929817 00:09:02.261 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 929817 ']' 00:09:02.261 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 929817 00:09:02.261 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:02.261 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.261 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 929817 00:09:02.261 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:02.261 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:02.262 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 929817' 00:09:02.262 killing process with pid 929817 00:09:02.262 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 929817 00:09:02.262 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 929817 00:09:02.829 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 929827 00:09:02.829 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 929827 ']' 00:09:02.829 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 929827 00:09:02.829 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:02.829 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.829 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 929827 00:09:02.829 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:02.829 17:26:01 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:02.829 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 929827' 00:09:02.829 killing process with pid 929827 00:09:02.829 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 929827 00:09:02.830 17:26:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 929827 00:09:03.089 00:09:03.089 real 0m2.607s 00:09:03.089 user 0m2.754s 00:09:03.089 sys 0m0.853s 00:09:03.089 17:26:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.089 17:26:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:03.089 ************************************ 00:09:03.089 END TEST locking_app_on_unlocked_coremask 00:09:03.089 ************************************ 00:09:03.089 17:26:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:03.089 17:26:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:03.089 17:26:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.089 17:26:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:03.089 ************************************ 00:09:03.089 START TEST locking_app_on_locked_coremask 00:09:03.089 ************************************ 00:09:03.089 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:09:03.089 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=930315 00:09:03.089 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 930315 /var/tmp/spdk.sock 00:09:03.089 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:03.089 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 930315 ']' 00:09:03.089 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.089 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:03.089 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.089 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:03.089 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:03.381 [2024-10-14 17:26:02.276768] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:09:03.381 [2024-10-14 17:26:02.276814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930315 ] 00:09:03.381 [2024-10-14 17:26:02.342698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.381 [2024-10-14 17:26:02.380114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.672 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:03.672 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:03.672 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=930320 00:09:03.672 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 930320 /var/tmp/spdk2.sock 00:09:03.672 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:03.672 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:03.672 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 930320 /var/tmp/spdk2.sock 00:09:03.672 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:03.672 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:03.672 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:03.672 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:03.672 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 930320 /var/tmp/spdk2.sock 00:09:03.672 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 930320 ']' 00:09:03.672 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:03.672 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:03.672 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:03.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:03.673 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:03.673 17:26:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:03.673 [2024-10-14 17:26:02.652257] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:09:03.673 [2024-10-14 17:26:02.652303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930320 ] 00:09:03.673 [2024-10-14 17:26:02.726444] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 930315 has claimed it. 00:09:03.673 [2024-10-14 17:26:02.726480] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:04.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (930320) - No such process 00:09:04.240 ERROR: process (pid: 930320) is no longer running 00:09:04.240 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:04.240 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:09:04.240 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:04.240 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:04.240 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:04.240 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:04.240 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 930315 00:09:04.240 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 930315 00:09:04.240 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:04.808 lslocks: write error 00:09:04.808 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 930315 00:09:04.808 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 930315 ']' 00:09:04.808 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 930315 00:09:04.808 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:04.808 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.808 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 930315 00:09:04.808 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:04.808 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:04.808 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 930315' 00:09:04.808 killing process with pid 930315 00:09:04.808 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 930315 00:09:04.808 17:26:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 930315 00:09:05.069 00:09:05.069 real 0m1.875s 00:09:05.069 user 0m1.996s 00:09:05.069 sys 0m0.668s 00:09:05.069 17:26:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.069 
17:26:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:05.069 ************************************ 00:09:05.069 END TEST locking_app_on_locked_coremask 00:09:05.069 ************************************ 00:09:05.069 17:26:04 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:05.069 17:26:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:05.069 17:26:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.069 17:26:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:05.069 ************************************ 00:09:05.069 START TEST locking_overlapped_coremask 00:09:05.069 ************************************ 00:09:05.069 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:09:05.069 17:26:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=930586 00:09:05.069 17:26:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 930586 /var/tmp/spdk.sock 00:09:05.069 17:26:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:09:05.069 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 930586 ']' 00:09:05.069 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.069 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.069 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.069 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.069 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:05.328 [2024-10-14 17:26:04.223529] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:09:05.329 [2024-10-14 17:26:04.223576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930586 ] 00:09:05.329 [2024-10-14 17:26:04.291348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:05.329 [2024-10-14 17:26:04.335288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.329 [2024-10-14 17:26:04.335395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.329 [2024-10-14 17:26:04.335396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.588 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:05.588 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:05.588 17:26:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=930695 00:09:05.588 17:26:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 930695 /var/tmp/spdk2.sock 00:09:05.588 17:26:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:05.588 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:05.588 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 930695 /var/tmp/spdk2.sock 00:09:05.588 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:05.588 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.588 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:05.588 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.588 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 930695 /var/tmp/spdk2.sock 00:09:05.588 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 930695 ']' 00:09:05.588 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:05.588 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.588 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:05.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:05.588 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.588 17:26:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:05.588 [2024-10-14 17:26:04.608423] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:09:05.588 [2024-10-14 17:26:04.608487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930695 ] 00:09:05.588 [2024-10-14 17:26:04.687950] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 930586 has claimed it. 00:09:05.588 [2024-10-14 17:26:04.687988] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:06.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (930695) - No such process 00:09:06.154 ERROR: process (pid: 930695) is no longer running 00:09:06.154 17:26:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:06.154 17:26:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:09:06.154 17:26:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:06.155 17:26:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:06.155 17:26:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:06.155 17:26:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:06.155 17:26:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:06.155 17:26:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:06.155 17:26:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:06.155 17:26:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:06.155 17:26:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 930586 00:09:06.155 17:26:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 930586 ']' 00:09:06.155 17:26:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 930586 00:09:06.155 17:26:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:09:06.155 17:26:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:06.155 17:26:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 930586 00:09:06.414 17:26:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:06.414 17:26:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:06.414 17:26:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 930586' 00:09:06.414 killing process with pid 930586 00:09:06.414 17:26:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 930586 00:09:06.414 17:26:05 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 930586 00:09:06.673 00:09:06.673 real 0m1.447s 00:09:06.673 user 0m3.993s 00:09:06.673 sys 0m0.417s 00:09:06.673 17:26:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.673 17:26:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:06.673 ************************************ 00:09:06.673 END TEST locking_overlapped_coremask 00:09:06.673 ************************************ 00:09:06.673 17:26:05 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:06.673 17:26:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:06.673 17:26:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.673 17:26:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:06.673 ************************************ 00:09:06.673 START TEST locking_overlapped_coremask_via_rpc 00:09:06.673 ************************************ 00:09:06.673 17:26:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:09:06.673 17:26:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=930854 00:09:06.673 17:26:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 930854 /var/tmp/spdk.sock 00:09:06.673 17:26:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:06.673 17:26:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 930854 ']' 00:09:06.673 17:26:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.673 17:26:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:06.673 17:26:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.673 17:26:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:06.673 17:26:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.673 [2024-10-14 17:26:05.741478] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:09:06.673 [2024-10-14 17:26:05.741525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930854 ] 00:09:06.673 [2024-10-14 17:26:05.809936] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
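check_remaining_locks, expanded in the trace above, pins down exactly which lock files survive the failed second target: the real files are globbed and compared against the brace expansion expected for the surviving -m 0x7 mask (cores 0 through 2):

locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ "${locks[*]}" == "${locks_expected[*]}" ]]   # any missing or extra lock fails the test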
00:09:06.673 [2024-10-14 17:26:05.809961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:06.932 [2024-10-14 17:26:05.852317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.932 [2024-10-14 17:26:05.852355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.932 [2024-10-14 17:26:05.852355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.192 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:07.192 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:07.192 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=931047 00:09:07.192 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 931047 /var/tmp/spdk2.sock 00:09:07.192 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:07.192 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 931047 ']' 00:09:07.192 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:07.192 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:07.192 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:07.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:07.192 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:07.192 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.192 [2024-10-14 17:26:06.127974] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:09:07.192 [2024-10-14 17:26:06.128027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid931047 ] 00:09:07.192 [2024-10-14 17:26:06.204865] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:07.192 [2024-10-14 17:26:06.204894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:07.192 [2024-10-14 17:26:06.291882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.192 [2024-10-14 17:26:06.292003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.192 [2024-10-14 17:26:06.292003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.130 [2024-10-14 17:26:06.981668] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 930854 has claimed it. 
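Context for the two claim_cpu_cores failures in this suite: SPDK serializes CPU ownership through per-core lock files, /var/tmp/spdk_cpu_lock_NNN. The first target holds mask 0x7 (cores 0-2) and the second is given mask 0x1c (cores 2-4), so both contend for core 2 and the later claim is rejected. A minimal sketch of the startup-time variant, assuming flock-style locking on those files (the locking primitive itself is not visible in this output):

    # core masks seen in this suite: 0x7 = 0b00111 (cores 0-2), 0x1c = 0b11100 (cores 2-4)
    build/bin/spdk_tgt -m 0x7 &                        # claims cores 0-2 -> /var/tmp/spdk_cpu_lock_000..002
    sleep 1
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock  # contends for core 2 -> claim fails, target exits
    ls /var/tmp/spdk_cpu_lock_*                        # only _000 _001 _002 remain

In the via_rpc variant both targets are launched with --disable-cpumask-locks, which is why the second one above still reported "Total cores available: 3" and started reactors on cores 2-4; the error just above comes from the later attempt to re-enable the locks over RPC while process 930854 already owns core 2.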
00:09:08.130 request: 00:09:08.130 { 00:09:08.130 "method": "framework_enable_cpumask_locks", 00:09:08.130 "req_id": 1 00:09:08.130 } 00:09:08.130 Got JSON-RPC error response 00:09:08.130 response: 00:09:08.130 { 00:09:08.130 "code": -32603, 00:09:08.130 "message": "Failed to claim CPU core: 2" 00:09:08.130 } 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 930854 /var/tmp/spdk.sock 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 930854 ']' 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.130 17:26:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.130 17:26:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:08.130 17:26:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:08.130 17:26:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 931047 /var/tmp/spdk2.sock 00:09:08.130 17:26:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 931047 ']' 00:09:08.130 17:26:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:08.130 17:26:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.130 17:26:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:08.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
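The request/response pair above is the assertion at the heart of locking_overlapped_coremask_via_rpc. rpc_cmd drives scripts/rpc.py under the hood, so the sequence the test exercises is roughly:

    ./scripts/rpc.py framework_enable_cpumask_locks                         # first target (cores 0-2): succeeds
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: must fail
    # -> {"code": -32603, "message": "Failed to claim CPU core: 2"}

The NOT helper inverts the exit status, so the JSON-RPC error is the passing outcome. The long backslash-escaped string in the check_remaining_locks trace is bash xtrace quoting of a literal [[ ... == ... ]] comparison, asserting that exactly /var/tmp/spdk_cpu_lock_000 through _002 remain; it is not corrupted output.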
00:09:08.130 17:26:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.130 17:26:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.390 17:26:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:08.390 17:26:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:08.390 17:26:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:08.390 17:26:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:08.390 17:26:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:08.390 17:26:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:08.390 00:09:08.390 real 0m1.716s 00:09:08.390 user 0m0.839s 00:09:08.390 sys 0m0.131s 00:09:08.390 17:26:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.390 17:26:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.390 ************************************ 00:09:08.390 END TEST locking_overlapped_coremask_via_rpc 00:09:08.390 ************************************ 00:09:08.390 17:26:07 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:08.390 17:26:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 930854 ]] 00:09:08.390 17:26:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 930854 00:09:08.390 17:26:07 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 930854 ']' 00:09:08.390 17:26:07 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 930854 00:09:08.390 17:26:07 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:09:08.390 17:26:07 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.390 17:26:07 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 930854 00:09:08.390 17:26:07 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:08.390 17:26:07 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:08.390 17:26:07 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 930854' 00:09:08.390 killing process with pid 930854 00:09:08.390 17:26:07 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 930854 00:09:08.390 17:26:07 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 930854 00:09:08.959 17:26:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 931047 ]] 00:09:08.959 17:26:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 931047 00:09:08.959 17:26:07 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 931047 ']' 00:09:08.959 17:26:07 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 931047 00:09:08.959 17:26:07 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:09:08.959 17:26:07 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:09:08.959 17:26:07 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 931047 00:09:08.959 17:26:07 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:09:08.959 17:26:07 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:09:08.959 17:26:07 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 931047' 00:09:08.959 killing process with pid 931047 00:09:08.959 17:26:07 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 931047 00:09:08.959 17:26:07 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 931047 00:09:09.219 17:26:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:09.219 17:26:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:09.219 17:26:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 930854 ]] 00:09:09.219 17:26:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 930854 00:09:09.219 17:26:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 930854 ']' 00:09:09.219 17:26:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 930854 00:09:09.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (930854) - No such process 00:09:09.219 17:26:08 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 930854 is not found' 00:09:09.219 Process with pid 930854 is not found 00:09:09.219 17:26:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 931047 ]] 00:09:09.219 17:26:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 931047 00:09:09.219 17:26:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 931047 ']' 00:09:09.219 17:26:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 931047 00:09:09.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (931047) - No such process 00:09:09.219 17:26:08 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 931047 is not found' 00:09:09.219 Process with pid 931047 is not found 00:09:09.219 17:26:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:09.219 00:09:09.219 real 0m13.962s 00:09:09.219 user 0m24.373s 00:09:09.219 sys 0m4.941s 00:09:09.219 17:26:08 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.219 17:26:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:09.219 ************************************ 00:09:09.219 END TEST cpu_locks 00:09:09.219 ************************************ 00:09:09.219 00:09:09.219 real 0m38.567s 00:09:09.219 user 1m13.478s 00:09:09.219 sys 0m8.431s 00:09:09.219 17:26:08 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.219 17:26:08 event -- common/autotest_common.sh@10 -- # set +x 00:09:09.219 ************************************ 00:09:09.219 END TEST event 00:09:09.219 ************************************ 00:09:09.219 17:26:08 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:09.219 17:26:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:09.219 17:26:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.219 17:26:08 -- common/autotest_common.sh@10 -- # set +x 00:09:09.219 ************************************ 00:09:09.219 START TEST thread 00:09:09.219 ************************************ 00:09:09.219 17:26:08 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:09.219 * Looking for test storage... 00:09:09.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:09:09.478 17:26:08 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:09.478 17:26:08 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:09:09.478 17:26:08 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:09.478 17:26:08 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:09.478 17:26:08 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.478 17:26:08 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.478 17:26:08 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.478 17:26:08 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.478 17:26:08 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.478 17:26:08 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.478 17:26:08 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.478 17:26:08 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.478 17:26:08 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.478 17:26:08 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.478 17:26:08 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.478 17:26:08 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:09.478 17:26:08 thread -- scripts/common.sh@345 -- # : 1 00:09:09.478 17:26:08 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.478 17:26:08 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:09.478 17:26:08 thread -- scripts/common.sh@365 -- # decimal 1 00:09:09.478 17:26:08 thread -- scripts/common.sh@353 -- # local d=1 00:09:09.478 17:26:08 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.478 17:26:08 thread -- scripts/common.sh@355 -- # echo 1 00:09:09.478 17:26:08 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.478 17:26:08 thread -- scripts/common.sh@366 -- # decimal 2 00:09:09.478 17:26:08 thread -- scripts/common.sh@353 -- # local d=2 00:09:09.478 17:26:08 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.478 17:26:08 thread -- scripts/common.sh@355 -- # echo 2 00:09:09.478 17:26:08 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.478 17:26:08 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.478 17:26:08 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.478 17:26:08 thread -- scripts/common.sh@368 -- # return 0 00:09:09.478 17:26:08 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.478 17:26:08 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:09.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.478 --rc genhtml_branch_coverage=1 00:09:09.478 --rc genhtml_function_coverage=1 00:09:09.478 --rc genhtml_legend=1 00:09:09.478 --rc geninfo_all_blocks=1 00:09:09.478 --rc geninfo_unexecuted_blocks=1 00:09:09.478 00:09:09.478 ' 00:09:09.478 17:26:08 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:09.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.478 --rc genhtml_branch_coverage=1 00:09:09.478 --rc genhtml_function_coverage=1 00:09:09.478 --rc genhtml_legend=1 00:09:09.478 --rc geninfo_all_blocks=1 00:09:09.478 --rc geninfo_unexecuted_blocks=1 00:09:09.478 00:09:09.478 ' 00:09:09.478 17:26:08 thread 
-- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:09.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.478 --rc genhtml_branch_coverage=1 00:09:09.478 --rc genhtml_function_coverage=1 00:09:09.478 --rc genhtml_legend=1 00:09:09.478 --rc geninfo_all_blocks=1 00:09:09.478 --rc geninfo_unexecuted_blocks=1 00:09:09.478 00:09:09.478 ' 00:09:09.478 17:26:08 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:09.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.478 --rc genhtml_branch_coverage=1 00:09:09.478 --rc genhtml_function_coverage=1 00:09:09.478 --rc genhtml_legend=1 00:09:09.478 --rc geninfo_all_blocks=1 00:09:09.478 --rc geninfo_unexecuted_blocks=1 00:09:09.478 00:09:09.478 ' 00:09:09.478 17:26:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:09.478 17:26:08 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:09:09.478 17:26:08 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.478 17:26:08 thread -- common/autotest_common.sh@10 -- # set +x 00:09:09.478 ************************************ 00:09:09.478 START TEST thread_poller_perf 00:09:09.478 ************************************ 00:09:09.478 17:26:08 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:09.478 [2024-10-14 17:26:08.496939] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:09:09.479 [2024-10-14 17:26:08.497007] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid931427 ] 00:09:09.479 [2024-10-14 17:26:08.568729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.479 [2024-10-14 17:26:08.608718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.479 Running 1000 pollers for 1 seconds with 1 microseconds period. 
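The two result tables that follow can be checked by hand: poller_cost is busy cycles divided by total_run_count, and the nanosecond figure converts that through tsc_hz (2.1 GHz here):

    1 us period:  2107086788 cyc / 422000  calls ≈ 4993 cyc ;  4993 / 2.1 ≈ 2377 nsec
    0 us period:  2101558308 cyc / 5365000 calls ≈  391 cyc ;   391 / 2.1 ≈  186 nsec

so in this run a timed poller costs roughly 13x more per invocation than a busy-polled one.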
00:09:10.856 [2024-10-14T15:26:09.994Z] ====================================== 00:09:10.856 [2024-10-14T15:26:09.994Z] busy:2107086788 (cyc) 00:09:10.856 [2024-10-14T15:26:09.994Z] total_run_count: 422000 00:09:10.856 [2024-10-14T15:26:09.994Z] tsc_hz: 2100000000 (cyc) 00:09:10.856 [2024-10-14T15:26:09.994Z] ====================================== 00:09:10.856 [2024-10-14T15:26:09.994Z] poller_cost: 4993 (cyc), 2377 (nsec) 00:09:10.856 00:09:10.856 real 0m1.178s 00:09:10.856 user 0m1.097s 00:09:10.856 sys 0m0.077s 00:09:10.856 17:26:09 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.856 17:26:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:10.856 ************************************ 00:09:10.856 END TEST thread_poller_perf 00:09:10.856 ************************************ 00:09:10.856 17:26:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:10.856 17:26:09 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:09:10.856 17:26:09 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.856 17:26:09 thread -- common/autotest_common.sh@10 -- # set +x 00:09:10.856 ************************************ 00:09:10.856 START TEST thread_poller_perf 00:09:10.856 ************************************ 00:09:10.856 17:26:09 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:10.856 [2024-10-14 17:26:09.742536] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:09:10.856 [2024-10-14 17:26:09.742616] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid931674 ] 00:09:10.856 [2024-10-14 17:26:09.812247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.856 [2024-10-14 17:26:09.851686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.856 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:09:11.792 [2024-10-14T15:26:10.930Z] ====================================== 00:09:11.792 [2024-10-14T15:26:10.930Z] busy:2101558308 (cyc) 00:09:11.792 [2024-10-14T15:26:10.930Z] total_run_count: 5365000 00:09:11.792 [2024-10-14T15:26:10.930Z] tsc_hz: 2100000000 (cyc) 00:09:11.792 [2024-10-14T15:26:10.930Z] ====================================== 00:09:11.792 [2024-10-14T15:26:10.930Z] poller_cost: 391 (cyc), 186 (nsec) 00:09:11.792 00:09:11.792 real 0m1.172s 00:09:11.792 user 0m1.095s 00:09:11.792 sys 0m0.073s 00:09:11.792 17:26:10 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.792 17:26:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:11.792 ************************************ 00:09:11.792 END TEST thread_poller_perf 00:09:11.792 ************************************ 00:09:11.792 17:26:10 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:11.792 00:09:11.792 real 0m2.654s 00:09:11.792 user 0m2.349s 00:09:11.792 sys 0m0.316s 00:09:11.792 17:26:10 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.792 17:26:10 thread -- common/autotest_common.sh@10 -- # set +x 00:09:11.792 ************************************ 00:09:11.792 END TEST thread 00:09:11.792 ************************************ 00:09:12.052 17:26:10 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:12.052 17:26:10 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:12.052 17:26:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:12.052 17:26:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.052 17:26:10 -- common/autotest_common.sh@10 -- # set +x 00:09:12.052 ************************************ 00:09:12.052 START TEST app_cmdline 00:09:12.052 ************************************ 00:09:12.052 17:26:11 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:12.052 * Looking for test storage... 
00:09:12.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:12.052 17:26:11 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:12.052 17:26:11 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:09:12.052 17:26:11 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:12.052 17:26:11 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.052 17:26:11 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:12.052 17:26:11 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.052 17:26:11 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:12.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.052 --rc genhtml_branch_coverage=1 00:09:12.052 --rc genhtml_function_coverage=1 00:09:12.052 --rc genhtml_legend=1 00:09:12.052 --rc geninfo_all_blocks=1 00:09:12.052 --rc geninfo_unexecuted_blocks=1 00:09:12.052 00:09:12.052 ' 00:09:12.052 17:26:11 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:12.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.052 --rc genhtml_branch_coverage=1 00:09:12.052 --rc genhtml_function_coverage=1 00:09:12.052 --rc genhtml_legend=1 00:09:12.052 --rc geninfo_all_blocks=1 00:09:12.052 --rc geninfo_unexecuted_blocks=1 
00:09:12.052 00:09:12.052 ' 00:09:12.052 17:26:11 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:12.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.052 --rc genhtml_branch_coverage=1 00:09:12.052 --rc genhtml_function_coverage=1 00:09:12.052 --rc genhtml_legend=1 00:09:12.052 --rc geninfo_all_blocks=1 00:09:12.052 --rc geninfo_unexecuted_blocks=1 00:09:12.052 00:09:12.052 ' 00:09:12.052 17:26:11 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:12.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.052 --rc genhtml_branch_coverage=1 00:09:12.052 --rc genhtml_function_coverage=1 00:09:12.052 --rc genhtml_legend=1 00:09:12.052 --rc geninfo_all_blocks=1 00:09:12.052 --rc geninfo_unexecuted_blocks=1 00:09:12.052 00:09:12.052 ' 00:09:12.052 17:26:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:12.052 17:26:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=931974 00:09:12.052 17:26:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 931974 00:09:12.052 17:26:11 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:12.052 17:26:11 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 931974 ']' 00:09:12.052 17:26:11 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.052 17:26:11 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.052 17:26:11 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.052 17:26:11 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.052 17:26:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:12.311 [2024-10-14 17:26:11.228709] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
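For this suite the target is launched with --rpcs-allowed spdk_get_version,rpc_get_methods (see the command line above), so only those two methods are callable and anything else must come back as JSON-RPC -32601. The checks below reduce to roughly:

    ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # expect exactly: rpc_get_methods, spdk_get_version
    ./scripts/rpc.py spdk_get_version                       # allowed: returns the version object
    ./scripts/rpc.py env_dpdk_get_mem_stats                 # filtered: {"code": -32601, "message": "Method not found"}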
00:09:12.311 [2024-10-14 17:26:11.228757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid931974 ] 00:09:12.311 [2024-10-14 17:26:11.297259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.311 [2024-10-14 17:26:11.339105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.570 17:26:11 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:12.570 17:26:11 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:09:12.570 17:26:11 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:12.829 { 00:09:12.829 "version": "SPDK v25.01-pre git sha1 2a72c3069", 00:09:12.829 "fields": { 00:09:12.829 "major": 25, 00:09:12.829 "minor": 1, 00:09:12.829 "patch": 0, 00:09:12.829 "suffix": "-pre", 00:09:12.829 "commit": "2a72c3069" 00:09:12.829 } 00:09:12.829 } 00:09:12.829 17:26:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:12.829 17:26:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:12.829 17:26:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:12.829 17:26:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:12.829 17:26:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:12.829 17:26:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:12.829 17:26:11 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.829 17:26:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:12.829 17:26:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:12.829 17:26:11 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.829 17:26:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:12.829 17:26:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:12.829 17:26:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:12.829 17:26:11 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:09:12.829 17:26:11 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:12.829 17:26:11 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.829 17:26:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:12.829 17:26:11 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.830 17:26:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:12.830 17:26:11 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.830 17:26:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:12.830 17:26:11 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.830 17:26:11 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:12.830 17:26:11 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:12.830 request: 00:09:12.830 { 00:09:12.830 "method": "env_dpdk_get_mem_stats", 00:09:12.830 "req_id": 1 00:09:12.830 } 00:09:12.830 Got JSON-RPC error response 00:09:12.830 response: 00:09:12.830 { 00:09:12.830 "code": -32601, 00:09:12.830 "message": "Method not found" 00:09:12.830 } 00:09:13.089 17:26:11 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:09:13.089 17:26:11 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:13.089 17:26:11 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:13.089 17:26:11 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:13.089 17:26:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 931974 00:09:13.089 17:26:11 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 931974 ']' 00:09:13.089 17:26:11 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 931974 00:09:13.089 17:26:11 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:09:13.089 17:26:11 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:13.089 17:26:11 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 931974 00:09:13.089 17:26:12 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:13.089 17:26:12 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:13.089 17:26:12 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 931974' 00:09:13.089 killing process with pid 931974 00:09:13.089 17:26:12 app_cmdline -- common/autotest_common.sh@969 -- # kill 931974 00:09:13.089 17:26:12 app_cmdline -- common/autotest_common.sh@974 -- # wait 931974 00:09:13.349 00:09:13.349 real 0m1.335s 00:09:13.349 user 0m1.545s 00:09:13.349 sys 0m0.463s 00:09:13.349 17:26:12 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.349 17:26:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:13.349 ************************************ 00:09:13.349 END TEST app_cmdline 00:09:13.349 ************************************ 00:09:13.349 17:26:12 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:13.349 17:26:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:13.349 17:26:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.349 17:26:12 -- common/autotest_common.sh@10 -- # set +x 00:09:13.349 ************************************ 00:09:13.349 START TEST version 00:09:13.349 ************************************ 00:09:13.349 17:26:12 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:13.349 * Looking for test storage... 
00:09:13.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:13.609 17:26:12 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:13.609 17:26:12 version -- common/autotest_common.sh@1691 -- # lcov --version 00:09:13.609 17:26:12 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:13.609 17:26:12 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:13.609 17:26:12 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.609 17:26:12 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.609 17:26:12 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.609 17:26:12 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.609 17:26:12 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.609 17:26:12 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.609 17:26:12 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.609 17:26:12 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.609 17:26:12 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.609 17:26:12 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.609 17:26:12 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.609 17:26:12 version -- scripts/common.sh@344 -- # case "$op" in 00:09:13.609 17:26:12 version -- scripts/common.sh@345 -- # : 1 00:09:13.609 17:26:12 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.609 17:26:12 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:13.609 17:26:12 version -- scripts/common.sh@365 -- # decimal 1 00:09:13.609 17:26:12 version -- scripts/common.sh@353 -- # local d=1 00:09:13.609 17:26:12 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.609 17:26:12 version -- scripts/common.sh@355 -- # echo 1 00:09:13.609 17:26:12 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.609 17:26:12 version -- scripts/common.sh@366 -- # decimal 2 00:09:13.609 17:26:12 version -- scripts/common.sh@353 -- # local d=2 00:09:13.609 17:26:12 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.609 17:26:12 version -- scripts/common.sh@355 -- # echo 2 00:09:13.609 17:26:12 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.609 17:26:12 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.609 17:26:12 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.609 17:26:12 version -- scripts/common.sh@368 -- # return 0 00:09:13.609 17:26:12 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.609 17:26:12 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:13.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.609 --rc genhtml_branch_coverage=1 00:09:13.609 --rc genhtml_function_coverage=1 00:09:13.609 --rc genhtml_legend=1 00:09:13.609 --rc geninfo_all_blocks=1 00:09:13.609 --rc geninfo_unexecuted_blocks=1 00:09:13.609 00:09:13.609 ' 00:09:13.609 17:26:12 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:13.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.609 --rc genhtml_branch_coverage=1 00:09:13.609 --rc genhtml_function_coverage=1 00:09:13.609 --rc genhtml_legend=1 00:09:13.609 --rc geninfo_all_blocks=1 00:09:13.609 --rc geninfo_unexecuted_blocks=1 00:09:13.609 00:09:13.609 ' 00:09:13.609 17:26:12 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:13.609 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.610 --rc genhtml_branch_coverage=1 00:09:13.610 --rc genhtml_function_coverage=1 00:09:13.610 --rc genhtml_legend=1 00:09:13.610 --rc geninfo_all_blocks=1 00:09:13.610 --rc geninfo_unexecuted_blocks=1 00:09:13.610 00:09:13.610 ' 00:09:13.610 17:26:12 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:13.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.610 --rc genhtml_branch_coverage=1 00:09:13.610 --rc genhtml_function_coverage=1 00:09:13.610 --rc genhtml_legend=1 00:09:13.610 --rc geninfo_all_blocks=1 00:09:13.610 --rc geninfo_unexecuted_blocks=1 00:09:13.610 00:09:13.610 ' 00:09:13.610 17:26:12 version -- app/version.sh@17 -- # get_header_version major 00:09:13.610 17:26:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:13.610 17:26:12 version -- app/version.sh@14 -- # tr -d '"' 00:09:13.610 17:26:12 version -- app/version.sh@14 -- # cut -f2 00:09:13.610 17:26:12 version -- app/version.sh@17 -- # major=25 00:09:13.610 17:26:12 version -- app/version.sh@18 -- # get_header_version minor 00:09:13.610 17:26:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:13.610 17:26:12 version -- app/version.sh@14 -- # cut -f2 00:09:13.610 17:26:12 version -- app/version.sh@14 -- # tr -d '"' 00:09:13.610 17:26:12 version -- app/version.sh@18 -- # minor=1 00:09:13.610 17:26:12 version -- app/version.sh@19 -- # get_header_version patch 00:09:13.610 17:26:12 version -- app/version.sh@14 -- # cut -f2 00:09:13.610 17:26:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:13.610 17:26:12 version -- app/version.sh@14 -- # tr -d '"' 00:09:13.610 17:26:12 version -- app/version.sh@19 -- # patch=0 00:09:13.610 17:26:12 version -- app/version.sh@20 -- # get_header_version suffix 00:09:13.610 17:26:12 version -- app/version.sh@14 -- # cut -f2 00:09:13.610 17:26:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:13.610 17:26:12 version -- app/version.sh@14 -- # tr -d '"' 00:09:13.610 17:26:12 version -- app/version.sh@20 -- # suffix=-pre 00:09:13.610 17:26:12 version -- app/version.sh@22 -- # version=25.1 00:09:13.610 17:26:12 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:13.610 17:26:12 version -- app/version.sh@28 -- # version=25.1rc0 00:09:13.610 17:26:12 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:13.610 17:26:12 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:13.610 17:26:12 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:13.610 17:26:12 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:13.610 00:09:13.610 real 0m0.242s 00:09:13.610 user 0m0.133s 00:09:13.610 sys 0m0.145s 00:09:13.610 17:26:12 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.610 
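The get_header_version trace above reduces to a grep/cut/tr pipeline over include/spdk/version.h; reconstructed below (the -pre -> rc0 mapping on the last line is inferred from the resulting 25.1rc0, not shown explicitly in the trace):

    h=include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+'  "$h" | cut -f2 | tr -d '"')   # 25
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+'  "$h" | cut -f2 | tr -d '"')   # 1
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+'  "$h" | cut -f2 | tr -d '"')   # 0
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$h" | cut -f2 | tr -d '"')  # -pre
    version=$major.$minor; (( patch != 0 )) && version=$version.$patch
    [[ -n $suffix ]] && version=${version}rc0   # assumption: -pre renders as rc0

The final assertion cross-checks this against python3 -c 'import spdk; print(spdk.__version__)', which also prints 25.1rc0.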
17:26:12 version -- common/autotest_common.sh@10 -- # set +x 00:09:13.610 ************************************ 00:09:13.610 END TEST version 00:09:13.610 ************************************ 00:09:13.610 17:26:12 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:13.610 17:26:12 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:13.610 17:26:12 -- spdk/autotest.sh@194 -- # uname -s 00:09:13.610 17:26:12 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:13.610 17:26:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:13.610 17:26:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:13.610 17:26:12 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:13.610 17:26:12 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:09:13.610 17:26:12 -- spdk/autotest.sh@256 -- # timing_exit lib 00:09:13.610 17:26:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:13.610 17:26:12 -- common/autotest_common.sh@10 -- # set +x 00:09:13.610 17:26:12 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:09:13.610 17:26:12 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:09:13.610 17:26:12 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:09:13.610 17:26:12 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:09:13.610 17:26:12 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:09:13.610 17:26:12 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:09:13.610 17:26:12 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:13.610 17:26:12 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:13.610 17:26:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.610 17:26:12 -- common/autotest_common.sh@10 -- # set +x 00:09:13.869 ************************************ 00:09:13.869 START TEST nvmf_tcp 00:09:13.869 ************************************ 00:09:13.869 17:26:12 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:13.869 * Looking for test storage... 
00:09:13.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:13.869 17:26:12 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:13.869 17:26:12 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:09:13.869 17:26:12 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:13.869 17:26:12 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:13.869 17:26:12 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.869 17:26:12 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.869 17:26:12 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.870 17:26:12 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:13.870 17:26:12 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.870 17:26:12 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:13.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.870 --rc genhtml_branch_coverage=1 00:09:13.870 --rc genhtml_function_coverage=1 00:09:13.870 --rc genhtml_legend=1 00:09:13.870 --rc geninfo_all_blocks=1 00:09:13.870 --rc geninfo_unexecuted_blocks=1 00:09:13.870 00:09:13.870 ' 00:09:13.870 17:26:12 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:13.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.870 --rc genhtml_branch_coverage=1 00:09:13.870 --rc genhtml_function_coverage=1 00:09:13.870 --rc genhtml_legend=1 00:09:13.870 --rc geninfo_all_blocks=1 00:09:13.870 --rc geninfo_unexecuted_blocks=1 00:09:13.870 00:09:13.870 ' 00:09:13.870 17:26:12 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:09:13.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.870 --rc genhtml_branch_coverage=1 00:09:13.870 --rc genhtml_function_coverage=1 00:09:13.870 --rc genhtml_legend=1 00:09:13.870 --rc geninfo_all_blocks=1 00:09:13.870 --rc geninfo_unexecuted_blocks=1 00:09:13.870 00:09:13.870 ' 00:09:13.870 17:26:12 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:13.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.870 --rc genhtml_branch_coverage=1 00:09:13.870 --rc genhtml_function_coverage=1 00:09:13.870 --rc genhtml_legend=1 00:09:13.870 --rc geninfo_all_blocks=1 00:09:13.870 --rc geninfo_unexecuted_blocks=1 00:09:13.870 00:09:13.870 ' 00:09:13.870 17:26:12 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:13.870 17:26:12 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:13.870 17:26:12 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:13.870 17:26:12 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:13.870 17:26:12 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.870 17:26:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:13.870 ************************************ 00:09:13.870 START TEST nvmf_target_core 00:09:13.870 ************************************ 00:09:13.870 17:26:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:14.130 * Looking for test storage... 00:09:14.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:14.130 17:26:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:14.130 17:26:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:09:14.130 17:26:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:14.130 17:26:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:14.130 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.130 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.130 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.130 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.130 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.130 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:14.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.131 --rc genhtml_branch_coverage=1 00:09:14.131 --rc genhtml_function_coverage=1 00:09:14.131 --rc genhtml_legend=1 00:09:14.131 --rc geninfo_all_blocks=1 00:09:14.131 --rc geninfo_unexecuted_blocks=1 00:09:14.131 00:09:14.131 ' 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:14.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.131 --rc genhtml_branch_coverage=1 00:09:14.131 --rc genhtml_function_coverage=1 00:09:14.131 --rc genhtml_legend=1 00:09:14.131 --rc geninfo_all_blocks=1 00:09:14.131 --rc geninfo_unexecuted_blocks=1 00:09:14.131 00:09:14.131 ' 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:14.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.131 --rc genhtml_branch_coverage=1 00:09:14.131 --rc genhtml_function_coverage=1 00:09:14.131 --rc genhtml_legend=1 00:09:14.131 --rc geninfo_all_blocks=1 00:09:14.131 --rc geninfo_unexecuted_blocks=1 00:09:14.131 00:09:14.131 ' 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:14.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.131 --rc genhtml_branch_coverage=1 00:09:14.131 --rc genhtml_function_coverage=1 00:09:14.131 --rc genhtml_legend=1 00:09:14.131 --rc geninfo_all_blocks=1 00:09:14.131 --rc geninfo_unexecuted_blocks=1 00:09:14.131 00:09:14.131 ' 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.131 17:26:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:14.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.132 
************************************ 00:09:14.132 START TEST nvmf_abort 00:09:14.132 ************************************ 00:09:14.132 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:14.393 * Looking for test storage... 00:09:14.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.393 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:14.393 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:09:14.393 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:14.393 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:14.393 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.393 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.393 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.393 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.393 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.393 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.393 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.393 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.393 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.393 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.393 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.393 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:09:14.393 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:09:14.393 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.393 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:14.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.394 --rc genhtml_branch_coverage=1 00:09:14.394 --rc genhtml_function_coverage=1 00:09:14.394 --rc genhtml_legend=1 00:09:14.394 --rc geninfo_all_blocks=1 00:09:14.394 --rc geninfo_unexecuted_blocks=1 00:09:14.394 00:09:14.394 ' 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:14.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.394 --rc genhtml_branch_coverage=1 00:09:14.394 --rc genhtml_function_coverage=1 00:09:14.394 --rc genhtml_legend=1 00:09:14.394 --rc geninfo_all_blocks=1 00:09:14.394 --rc geninfo_unexecuted_blocks=1 00:09:14.394 00:09:14.394 ' 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:14.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.394 --rc genhtml_branch_coverage=1 00:09:14.394 --rc genhtml_function_coverage=1 00:09:14.394 --rc genhtml_legend=1 00:09:14.394 --rc geninfo_all_blocks=1 00:09:14.394 --rc geninfo_unexecuted_blocks=1 00:09:14.394 00:09:14.394 ' 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:14.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.394 --rc genhtml_branch_coverage=1 00:09:14.394 --rc genhtml_function_coverage=1 00:09:14.394 --rc genhtml_legend=1 00:09:14.394 --rc geninfo_all_blocks=1 00:09:14.394 --rc geninfo_unexecuted_blocks=1 00:09:14.394 00:09:14.394 ' 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.394 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:14.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:09:14.395 17:26:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.977 17:26:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:20.977 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:20.977 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:20.977 17:26:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:20.977 Found net devices under 0000:86:00.0: cvl_0_0 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:20.977 Found net devices under 0000:86:00.1: cvl_0_1 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:20.977 17:26:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:20.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:09:20.977 00:09:20.977 --- 10.0.0.2 ping statistics --- 00:09:20.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.977 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:20.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:09:20.977 00:09:20.977 --- 10.0.0.1 ping statistics --- 00:09:20.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.977 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.977 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=935652 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 935652 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 935652 ']' 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.978 [2024-10-14 17:26:19.492377] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
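(The nvmftestinit trace above builds the two-sided NVMe/TCP test topology: the target-side E810 port cvl_0_0 is moved into the network namespace cvl_0_0_ns_spdk and given 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables ACCEPT rule opens TCP port 4420, and the two pings verify reachability in both directions. A minimal standalone sketch of that setup, using only the interface names and addresses shown in this log — an illustration, not the canonical logic in test/nvmf/common.sh:

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The target application is then launched inside the namespace, via ip netns exec cvl_0_0_ns_spdk nvmf_tgt as shown just below, which is why its pid — 935652 here — is tracked separately by waitforlisten.)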
00:09:20.978 [2024-10-14 17:26:19.492425] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.978 [2024-10-14 17:26:19.565537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:20.978 [2024-10-14 17:26:19.608676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.978 [2024-10-14 17:26:19.608711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.978 [2024-10-14 17:26:19.608719] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.978 [2024-10-14 17:26:19.608725] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.978 [2024-10-14 17:26:19.608730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.978 [2024-10-14 17:26:19.610158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.978 [2024-10-14 17:26:19.610243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.978 [2024-10-14 17:26:19.610244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.978 [2024-10-14 17:26:19.754515] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.978 Malloc0 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.978 Delay0 
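(The RPCs just traced configure the target end to end: a TCP transport with the flags exactly as issued above, a 64 MiB / 4096-byte-block Malloc0 bdev, and a Delay0 delay bdev layered on Malloc0 with all four latency knobs set to 1000000 microseconds, so queued I/O stays in flight long enough for aborts to land. A sketch of the same sequence issued directly through scripts/rpc.py — the log drives it through the harness's rpc_cmd wrapper instead:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256  # TCP transport
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0           # 64 MiB RAM disk
  # Delay bdev over Malloc0; -r/-t/-w/-n are the average/p99 read and write
  # latencies in microseconds taken by bdev_delay_create.
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
)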
00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.978 [2024-10-14 17:26:19.828059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.978 17:26:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:20.978 [2024-10-14 17:26:19.955295] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:22.882 [2024-10-14 17:26:21.983573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f9e50 is same with the state(6) to be set 00:09:22.882 Initializing NVMe Controllers 00:09:22.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:22.882 controller IO queue size 128 less than required 00:09:22.882 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:22.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:22.882 Initialization complete. Launching workers. 
00:09:22.882 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37617 00:09:22.882 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37678, failed to submit 62 00:09:22.882 success 37621, unsuccessful 57, failed 0 00:09:22.882 17:26:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:22.882 17:26:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.882 17:26:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:22.882 17:26:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.882 17:26:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:22.882 17:26:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:22.882 17:26:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:22.882 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:09:22.882 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:22.882 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:09:22.882 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:22.882 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:22.882 rmmod nvme_tcp 00:09:22.882 rmmod nvme_fabrics 00:09:23.141 rmmod nvme_keyring 00:09:23.141 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:23.141 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:09:23.141 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:09:23.141 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 935652 ']' 00:09:23.141 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 935652 00:09:23.141 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 935652 ']' 00:09:23.141 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 935652 00:09:23.141 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:09:23.141 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.141 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 935652 00:09:23.141 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:23.141 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:23.141 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 935652' 00:09:23.141 killing process with pid 935652 00:09:23.141 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 935652 00:09:23.141 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 935652 00:09:23.401 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:23.401 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:23.401 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:23.401 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:09:23.401 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:09:23.401 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:23.401 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:09:23.401 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:23.401 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:23.401 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.401 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.401 17:26:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.306 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:25.306 00:09:25.306 real 0m11.147s 00:09:25.306 user 0m11.402s 00:09:25.306 sys 0m5.402s 00:09:25.306 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.306 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:25.306 ************************************ 00:09:25.306 END TEST nvmf_abort 00:09:25.306 ************************************ 00:09:25.306 17:26:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:25.306 17:26:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:25.306 17:26:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.306 17:26:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.306 ************************************ 00:09:25.306 START TEST nvmf_ns_hotplug_stress 00:09:25.306 ************************************ 00:09:25.306 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:25.566 * Looking for test storage... 
00:09:25.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:25.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.566 --rc genhtml_branch_coverage=1 00:09:25.566 --rc genhtml_function_coverage=1 00:09:25.566 --rc genhtml_legend=1 00:09:25.566 --rc geninfo_all_blocks=1 00:09:25.566 --rc geninfo_unexecuted_blocks=1 00:09:25.566 00:09:25.566 ' 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:25.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.566 --rc genhtml_branch_coverage=1 00:09:25.566 --rc genhtml_function_coverage=1 00:09:25.566 --rc genhtml_legend=1 00:09:25.566 --rc geninfo_all_blocks=1 00:09:25.566 --rc geninfo_unexecuted_blocks=1 00:09:25.566 00:09:25.566 ' 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:25.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.566 --rc genhtml_branch_coverage=1 00:09:25.566 --rc genhtml_function_coverage=1 00:09:25.566 --rc genhtml_legend=1 00:09:25.566 --rc geninfo_all_blocks=1 00:09:25.566 --rc geninfo_unexecuted_blocks=1 00:09:25.566 00:09:25.566 ' 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:25.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.566 --rc genhtml_branch_coverage=1 00:09:25.566 --rc genhtml_function_coverage=1 00:09:25.566 --rc genhtml_legend=1 00:09:25.566 --rc geninfo_all_blocks=1 00:09:25.566 --rc geninfo_unexecuted_blocks=1 00:09:25.566 00:09:25.566 ' 00:09:25.566 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:09:25.567 17:26:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:32.140 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.140 
17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:32.140 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:32.140 Found net devices under 0000:86:00.0: cvl_0_0 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:32.140 Found net devices under 0000:86:00.1: cvl_0_1 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:32.140 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:32.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:32.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:09:32.141 00:09:32.141 --- 10.0.0.2 ping statistics --- 00:09:32.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.141 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:32.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:32.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:09:32.141 00:09:32.141 --- 10.0.0.1 ping statistics --- 00:09:32.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.141 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=939673 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 939673 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
939673 ']' 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.141 [2024-10-14 17:26:30.688228] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:09:32.141 [2024-10-14 17:26:30.688277] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.141 [2024-10-14 17:26:30.745695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:32.141 [2024-10-14 17:26:30.789519] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.141 [2024-10-14 17:26:30.789553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.141 [2024-10-14 17:26:30.789560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.141 [2024-10-14 17:26:30.789566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.141 [2024-10-14 17:26:30.789574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
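
Up to this point the trace covers test bring-up: nvmf/common.sh detected the two E810 ports (0x8086 - 0x159b, driver ice) as cvl_0_0/cvl_0_1, moved the target port into the cvl_0_0_ns_spdk network namespace, assigned 10.0.0.2 (target) and 10.0.0.1 (initiator), verified both directions with ping, and launched nvmf_tgt inside the namespace. A condensed sketch of that setup, using the same commands, interface names, and addresses the script logged for this run (condensed, not the script verbatim):

  # target port lives in its own namespace; the initiator port stays on the host
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP from the initiator
  ping -c 1 10.0.0.2                                                   # host -> target sanity check
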
00:09:32.141 [2024-10-14 17:26:30.793619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.141 [2024-10-14 17:26:30.793721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.141 [2024-10-14 17:26:30.793721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:32.141 17:26:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:32.141 [2024-10-14 17:26:31.098180] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.141 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:32.400 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.400 [2024-10-14 17:26:31.507661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.400 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:32.658 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:32.917 Malloc0 00:09:32.917 17:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:33.176 Delay0 00:09:33.176 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.434 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:33.434 NULL1 00:09:33.434 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:33.694 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=940159 00:09:33.694 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:33.694 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:33.694 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.952 17:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.211 17:26:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:34.211 17:26:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:34.211 true 00:09:34.469 17:26:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:34.469 17:26:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.469 17:26:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.728 17:26:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:34.728 17:26:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:34.987 true 00:09:34.987 17:26:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:34.987 17:26:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.245 17:26:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.504 17:26:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:35.504 17:26:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:35.504 true 00:09:35.763 17:26:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:35.763 17:26:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.763 17:26:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.022 17:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:36.022 17:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:36.281 true 00:09:36.281 17:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:36.281 17:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.540 17:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.799 17:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:36.799 17:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:36.799 true 00:09:37.058 17:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:37.058 17:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.058 17:26:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.317 17:26:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:37.317 17:26:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:37.576 true 00:09:37.576 17:26:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:37.576 17:26:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.835 17:26:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.093 17:26:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:38.093 17:26:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:38.093 true 00:09:38.352 17:26:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:38.352 17:26:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.352 17:26:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.610 17:26:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:38.610 17:26:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:38.869 true 00:09:38.869 17:26:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:38.869 17:26:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.127 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.385 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:39.385 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:39.385 true 00:09:39.385 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:39.385 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.643 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.901 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:39.901 17:26:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:40.159 true 00:09:40.159 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:40.159 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.418 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.676 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:40.676 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:40.676 true 00:09:40.676 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:40.676 17:26:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.935 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.194 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:41.194 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:41.453 true 00:09:41.453 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:41.453 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.711 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.970 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:41.971 17:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:41.971 true 00:09:41.971 17:26:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:41.971 17:26:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.292 17:26:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.599 17:26:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:42.599 17:26:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:42.599 true 00:09:42.858 17:26:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:42.858 17:26:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.858 17:26:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.117 17:26:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:43.117 17:26:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:43.376 true 00:09:43.376 17:26:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:43.376 17:26:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.634 17:26:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.893 17:26:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:43.893 17:26:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:43.893 true 00:09:44.151 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:44.151 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.151 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.409 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:44.409 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:44.668 true 00:09:44.668 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:44.668 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.926 17:26:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.184 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:45.184 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:45.184 true 00:09:45.442 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:45.442 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.442 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.701 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:45.701 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:45.959 true 00:09:45.959 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:45.959 17:26:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.218 17:26:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.476 17:26:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:46.476 17:26:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:46.476 true 00:09:46.476 17:26:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:46.476 17:26:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.735 17:26:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.993 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:46.993 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:47.251 true 00:09:47.251 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:47.251 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.509 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.767 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:47.767 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:47.767 true 00:09:47.767 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:47.767 17:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.026 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.284 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:48.284 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:48.543 true 00:09:48.543 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:48.543 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.801 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.059 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:49.059 17:26:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:49.059 true 00:09:49.059 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:49.059 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.318 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.577 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:49.577 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:49.835 true 00:09:49.835 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:49.835 17:26:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.094 17:26:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.352 17:26:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:50.352 17:26:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:50.352 true 00:09:50.352 17:26:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:50.352 17:26:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.611 17:26:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.870 17:26:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:50.870 17:26:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:51.129 true 00:09:51.129 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:51.129 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.388 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.647 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:51.647 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:51.647 true 00:09:51.647 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:51.647 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.906 17:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.164 17:26:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:52.164 17:26:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:52.422 true 00:09:52.422 17:26:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:52.422 17:26:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.681 17:26:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.681 17:26:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:52.681 17:26:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:52.940 true 00:09:52.940 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:52.940 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.199 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.457 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:53.457 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:53.717 true 00:09:53.717 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:53.717 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.975 17:26:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.975 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:09:53.975 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:54.234 true 00:09:54.234 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:54.234 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.492 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.751 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:09:54.751 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:09:55.010 true 00:09:55.010 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:55.010 17:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.269 17:26:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.269 17:26:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:09:55.269 17:26:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:09:55.527 true 00:09:55.527 17:26:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:55.527 17:26:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.786 17:26:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.045 17:26:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:09:56.045 17:26:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:09:56.304 true 00:09:56.304 17:26:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:56.304 17:26:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.562 17:26:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.562 17:26:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:09:56.562 17:26:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:09:56.822 true 00:09:56.822 17:26:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:56.822 17:26:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.080 17:26:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.339 17:26:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:09:57.339 17:26:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:09:57.598 true 00:09:57.598 17:26:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:57.598 17:26:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.856 17:26:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.856 17:26:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:09:57.856 17:26:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:09:58.114 true 00:09:58.114 17:26:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:58.114 17:26:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.373 17:26:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.632 17:26:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:09:58.632 17:26:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:09:58.890 true 00:09:58.890 17:26:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:58.890 17:26:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.151 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.151 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:09:59.151 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:09:59.409 true 00:09:59.409 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:09:59.409 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.668 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.928 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:09:59.928 17:26:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:00.186 true 00:10:00.186 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:10:00.186 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.445 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.445 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:00.445 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:00.703 true 00:10:00.703 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:10:00.703 17:26:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.962 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.220 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:01.220 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:01.479 true 00:10:01.479 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:10:01.479 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.738 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.738 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:01.738 17:27:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:01.996 true 00:10:01.996 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:10:01.996 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.255 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.513 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:02.513 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:02.772 true 00:10:02.772 17:27:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:10:02.772 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.030 17:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.030 17:27:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:03.030 17:27:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:03.288 true 00:10:03.288 17:27:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159 00:10:03.288 17:27:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.546 17:27:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.804 17:27:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:03.804 17:27:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:04.063 true 00:10:04.063 Initializing NVMe Controllers 00:10:04.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:04.063 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:10:04.063 Controller IO queue size 128, less than required. 00:10:04.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:04.063 WARNING: Some requested NVMe devices were skipped 00:10:04.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:04.063 Initialization complete. Launching workers. 
00:10:04.063 ========================================================
00:10:04.064                                                                           Latency(us)
00:10:04.064 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:10:04.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   27431.90      13.39    4666.01    1298.84    8562.55
00:10:04.064 ========================================================
00:10:04.064 Total                                                                   :   27431.90      13.39    4666.01    1298.84    8562.55
00:10:04.064
00:10:04.064 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940159
00:10:04.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (940159) - No such process
00:10:04.064 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 940159
00:10:04.064 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:04.064 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:04.322 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:04.322 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:04.322 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:04.322 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:04.322 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:04.581 null0
00:10:04.581 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:04.581 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:04.581 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:10:04.841 null1
00:10:04.841 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:04.841 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:04.841 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:10:04.841 null2
00:10:05.100 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:05.100 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:05.100 17:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:10:05.100 null3
00:10:05.100 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:05.100
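Taken together, the trace above is one pass of the main hotplug stress loop in ns_hotplug_stress.sh (script lines 44-50): while the I/O workload process is still alive (checked with `kill -0`), namespace 1 is hot-removed, re-added backed by the Delay0 bdev, and the NULL1 bdev is grown by one unit per pass (null_size 1027 through 1047 here). Once `kill -0` fails with "No such process", the loop ends, the workload is reaped with `wait` (line 53), and both namespaces are removed (lines 54-55); the latency table is the workload's end-of-run summary for NSID 2. A minimal sketch of that loop, reconstructed from the trace markers rather than from the script source; the `run_pid` variable name is an assumption:

    # Sketch only: pid variable name assumed; line numbers refer to
    # the ns_hotplug_stress.sh markers visible in the trace.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    while kill -0 "$run_pid"; do                  # line 44: loop while the workload (pid 940159 above) is alive
        $rpc nvmf_subsystem_remove_ns "$nqn" 1    # line 45: hot-remove NSID 1 under I/O
        $rpc nvmf_subsystem_add_ns "$nqn" Delay0  # line 46: re-attach it backed by the Delay0 bdev
        null_size=$((null_size + 1))              # line 49: 1027, 1028, ... 1047
        $rpc bdev_null_resize NULL1 "$null_size"  # line 50: grow NULL1 by one unit per pass
    done
    wait "$run_pid"                               # line 53: reap the finished workload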
17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:05.100 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:05.360 null4 00:10:05.360 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:05.360 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:05.360 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:05.619 null5 00:10:05.619 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:05.619 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:05.619 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:05.619 null6 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:05.879 null7 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
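With the single-namespace phase done, the script provisions the parallel phase (lines 58-64): nthreads is set to 8, null bdevs null0 through null7 are created (100 MB each, 4096-byte block size), and one add_remove worker per bdev is launched in the background, each worker's pid being appended to the pids array; line 66 then waits on all eight (pids 945726 through 945739 in the trace that follows). A sketch of that setup, assuming the two-loop structure implied by the @58-@64 markers:

    # Sketch only: loop layout inferred from the @58-@64 trace markers.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc bdev_null_create "null$i" 100 4096   # line 60: 100 MB null bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &          # line 63: worker i hotplugs NSID i+1
        pids+=($!)                                # line 64: remember the worker's pid
    done
    wait "${pids[@]}"                             # line 66: join all eight workers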
00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 945726 945727 945729 945731 945733 945735 945737 945739 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.879 17:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:06.139 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:06.139 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.139 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:06.139 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:06.139 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:06.139 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:06.139 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:06.139 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.399 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:06.659 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.659 17:27:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:06.659 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:06.659 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:06.659 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:06.659 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:06.659 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:06.659 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:06.918 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.918 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.918 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:06.918 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.918 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.918 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:06.918 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.918 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.918 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:06.918 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.918 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.918 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:06.918 17:27:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.918 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.918 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:06.918 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.918 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.918 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:06.919 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.919 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.919 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.919 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.919 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:06.919 17:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:06.919 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:06.919 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:06.919 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:06.919 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.919 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:06.919 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:06.919 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:06.919 17:27:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:07.178 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.178 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.178 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:07.178 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.178 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.178 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.178 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:07.178 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.178 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:07.178 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.179 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.179 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.179 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:07.179 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.179 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:07.179 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.179 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.179 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:07.179 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.179 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.179 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 
00:10:07.179 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.179 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.179 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:07.439 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:07.439 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:07.439 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:07.439 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:07.439 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:07.439 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:07.439 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:07.439 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.699 17:27:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:07.699 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:07.958 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:07.958 17:27:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:07.958 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.958 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:07.958 17:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.958 17:27:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.958 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:08.217 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:08.217 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:08.217 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.217 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:08.217 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:08.217 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:08.217 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:08.217 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:08.478 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.478 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.478 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:08.478 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.478 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.478 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:08.478 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.478 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.478 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.478 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:08.478 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.478 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:08.478 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.479 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.479 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:08.479 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.479 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.479 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:08.479 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.479 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.479 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:08.479 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.479 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.479 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:08.766 17:27:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:08.766 17:27:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.766 17:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:09.048 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:09.048 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:09.048 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:09.048 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:09.048 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:09.048 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.048 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:09.048 17:27:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:09.306 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.306 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.306 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
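
The churn above and below is the heart of ns_hotplug_stress: eight namespaces (nsid 1-8, backed by null bdevs null0-null7) are attached to and detached from nqn.2016-06.io.spdk:cnode1 over and over. A rough sketch of the script shape implied by the sh@16-sh@18 trace locations follows; the function name, helper variable, and backgrounding are reconstructions inferred from the interleaved ordering in this log, not the script's verbatim source:

  # Sketch only: names and structure reconstructed from the trace above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  add_remove() {
      # One worker per namespace: attach, then detach, ten times over.
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; ++i)); do                                        # sh@16
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
      done
  }

  for n in {0..7}; do
      add_remove "$((n + 1))" "null$n" &   # eight concurrent hotplug workers
  done
  wait   # their scheduling produces the shuffled ordering seen in this log
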
00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.307 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:09.565 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:09.565 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:09.565 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.565 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:09.565 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:09.565 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:09.565 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:09.565 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:09.565 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.565 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.565 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.824 17:27:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:09.824 17:27:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.824 17:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:10:10.082 17:27:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:10.082 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:10.082 rmmod nvme_tcp 00:10:10.082 rmmod nvme_fabrics 00:10:10.082 rmmod nvme_keyring 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 939673 ']' 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 939673 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 939673 ']' 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 939673 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 939673 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 939673' 00:10:10.342 killing process with pid 939673 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 939673 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 939673 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null'
00:10:10.342 17:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:12.878
00:10:12.878 real 0m47.092s
00:10:12.878 user 3m20.332s
00:10:12.878 sys 0m17.437s
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:10:12.878 ************************************
00:10:12.878 END TEST nvmf_ns_hotplug_stress
00:10:12.878 ************************************
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:12.878 ************************************
00:10:12.878 START TEST nvmf_delete_subsystem
************************************
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:10:12.878 * Looking for test storage...
00:10:12.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:12.878 17:27:11
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:12.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.878 --rc genhtml_branch_coverage=1 00:10:12.878 --rc genhtml_function_coverage=1 00:10:12.878 --rc genhtml_legend=1 00:10:12.878 --rc geninfo_all_blocks=1 00:10:12.878 --rc geninfo_unexecuted_blocks=1 00:10:12.878 00:10:12.878 ' 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:12.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.878 --rc genhtml_branch_coverage=1 00:10:12.878 --rc genhtml_function_coverage=1 00:10:12.878 --rc genhtml_legend=1 00:10:12.878 --rc geninfo_all_blocks=1 00:10:12.878 --rc geninfo_unexecuted_blocks=1 00:10:12.878 00:10:12.878 ' 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:12.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.878 --rc genhtml_branch_coverage=1 00:10:12.878 --rc genhtml_function_coverage=1 00:10:12.878 --rc genhtml_legend=1 00:10:12.878 --rc geninfo_all_blocks=1 00:10:12.878 --rc geninfo_unexecuted_blocks=1 00:10:12.878 00:10:12.878 ' 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:12.878 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.878 --rc genhtml_branch_coverage=1 00:10:12.878 --rc genhtml_function_coverage=1 00:10:12.878 --rc genhtml_legend=1 00:10:12.878 --rc geninfo_all_blocks=1 00:10:12.878 --rc geninfo_unexecuted_blocks=1 00:10:12.878 00:10:12.878 ' 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.878 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:12.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:12.879 17:27:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:19.450 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:19.450 
17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:19.450 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.450 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:19.451 Found net devices under 0000:86:00.0: cvl_0_0 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:19.451 Found net devices under 0000:86:00.1: cvl_0_1 
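
At this point nvmf_tcp_init (nvmf/common.sh) has found both E810 ports and, in the trace that follows, wires them into a point-to-point topology: the target port is moved into a dedicated network namespace while the initiator port stays in the host namespace, so the NVMe/TCP traffic crosses the physical link between the two ports (presumably cabled back-to-back on this phy rig). Condensed into plain shell, using the interface names and 10.0.0.x addresses this particular run discovered:

  # Values below are as discovered on this rig; other nodes differ.
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from a clean slate
  ip netns add cvl_0_0_ns_spdk                           # namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                     # host ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> host ns

The iptables rule is inserted through the ipts wrapper, which tags it with an SPDK_NVMF comment so the iptr cleanup pass seen at the end of the previous test can restore exactly the rules the run added.
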
00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:19.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:19.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms
00:10:19.451
00:10:19.451 --- 10.0.0.2 ping statistics ---
00:10:19.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:19.451 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms
00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:19.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:19.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms
00:10:19.451
00:10:19.451 --- 10.0.0.1 ping statistics ---
00:10:19.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:19.451 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms
00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0
00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:10:19.451 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:10:19.452 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:19.452 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:10:19.452 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:10:19.452 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:10:19.452 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:10:19.452 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:19.452 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:10:19.452 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=950515
00:10:19.452 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:10:19.452 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 950515
00:10:19.452 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 950515 ']'
00:10:19.452 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:19.452 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:19.452 17:27:17
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.452 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:19.452 17:27:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:19.452 [2024-10-14 17:27:17.877503] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:10:19.452 [2024-10-14 17:27:17.877550] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.452 [2024-10-14 17:27:17.951125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:19.452 [2024-10-14 17:27:17.990130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.452 [2024-10-14 17:27:17.990166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.452 [2024-10-14 17:27:17.990173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.452 [2024-10-14 17:27:17.990179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.452 [2024-10-14 17:27:17.990183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:19.452 [2024-10-14 17:27:17.991401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.452 [2024-10-14 17:27:17.991402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:19.452 [2024-10-14 17:27:18.139305] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:19.452 17:27:18 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:19.452 [2024-10-14 17:27:18.159523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:19.452 NULL1 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:19.452 Delay0 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=950631 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:19.452 17:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:19.452 [2024-10-14 17:27:18.261220] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
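The bring-up traced above is easier to read without the xtrace noise. Below is a minimal sketch of the same sequence as direct scripts/rpc.py calls (the rpc_cmd helper in the log is a thin wrapper around this); it assumes the nvmf_tgt that nvmfappstart launched inside the cvl_0_0_ns_spdk namespace is already up, with its RPC socket at the default /var/tmp/spdk.sock:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # TCP transport with 8 KiB in-capsule data (-u 8192), per NVMF_TRANSPORT_OPTS.
  $rpc nvmf_create_transport -t tcp -o -u 8192

  # Subsystem cnode1: allow any host (-a), set a serial, cap at 10 namespaces.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

  # Listen on the target-namespace address the initiator side will dial.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # A 1000 MB null bdev wrapped in a delay bdev; all four latency knobs are
  # 1000000 us, so every I/O sits in flight for about a second. That keeps the
  # queue full when the subsystem is deleted mid-workload below.
  $rpc bdev_null_create NULL1 1000 512
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

With that in place, spdk_nvme_perf is pointed at the listener (5-second randrw run on cores 2-3), and the test deletes the subsystem while those delayed I/Os are still queued, which produces the error storm that follows.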
00:10:21.358 17:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.358 17:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.358 17:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[per-I/O records trimmed: 'Read completed with error (sct=0, sc=8)', 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6', repeated for every request in flight while the subsystem is torn down under the running perf workload; the distinct error records were:]
[2024-10-14 17:27:20.465973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d1570 is same with the state(6) to be set
[2024-10-14 17:27:20.471998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f521c000c10 is same with the state(6) to be set
[2024-10-14 17:27:21.439477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d2a70 is same with the state(6) to be set
[2024-10-14 17:27:21.469210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d1750 is same with the state(6) to be set
[2024-10-14 17:27:21.469380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d1390 is same with the state(6) to be set
[2024-10-14 17:27:21.471823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f521c00cff0 is same with the state(6) to be set
[2024-10-14 17:27:21.472446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f521c00d790 is same with the state(6) to be set
Initializing NVMe Controllers
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
Controller IO queue size 128, less than required.
Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:22.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:22.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:22.738 Initialization complete. Launching workers. 00:10:22.738 ======================================================== 00:10:22.738 Latency(us) 00:10:22.738 Device Information : IOPS MiB/s Average min max 00:10:22.738 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.28 0.08 896974.42 320.51 1005858.69 00:10:22.738 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 175.75 0.09 972673.42 283.66 2001327.97 00:10:22.738 ======================================================== 00:10:22.738 Total : 345.03 0.17 935533.94 283.66 2001327.97 00:10:22.738 00:10:22.738 [2024-10-14 17:27:21.472987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d2a70 (9): Bad file descriptor 00:10:22.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:22.738 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.738 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:22.738 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 950631 00:10:22.738 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 950631 00:10:22.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (950631) - No such process 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 950631 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 950631 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 950631 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.997 17:27:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:22.997 [2024-10-14 17:27:21.999169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.997 17:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.997 17:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.997 17:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.997 17:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:22.997 17:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.997 17:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=951235 00:10:22.997 17:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:22.997 17:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:22.997 17:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 951235 00:10:22.997 17:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:22.997 [2024-10-14 17:27:22.082650] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
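The records that follow are the test's watchdog loop: after re-creating the subsystem and restarting perf (pid 951235, a 3-second run this time), it polls the process with kill -0, sleeping 0.5 s per iteration and failing the test if perf outlives its roughly 10-second budget. A minimal sketch of the pattern (loop ordering approximated; names as they appear in the log):

  perf_pid=951235          # spdk_nvme_perf -t 3 running in the background
  delay=0
  # Poll until perf exits; bail out if it is still alive after 20 iterations.
  while kill -0 "$perf_pid"; do
      (( delay++ > 20 )) && exit 1
      sleep 0.5
  done
  wait "$perf_pid"         # collect its exit status once the pid is gone

This run completes cleanly: the Delay0 namespace adds about a second to each request, which is why the latency table further down reports averages just above 1000000 us.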
00:10:23.565 17:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:23.565 17:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 951235 00:10:23.565 17:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:24.135 17:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:24.135 17:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 951235 00:10:24.135 17:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:24.395 17:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:24.395 17:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 951235 00:10:24.395 17:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:24.963 17:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:24.963 17:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 951235 00:10:24.963 17:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:25.530 17:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:25.530 17:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 951235 00:10:25.530 17:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:26.098 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:26.098 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 951235 00:10:26.098 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:26.098 Initializing NVMe Controllers 00:10:26.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:26.098 Controller IO queue size 128, less than required. 00:10:26.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:26.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:26.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:26.098 Initialization complete. Launching workers. 
00:10:26.098 ======================================================== 00:10:26.098 Latency(us) 00:10:26.098 Device Information : IOPS MiB/s Average min max 00:10:26.098 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002536.52 1000153.77 1009536.09 00:10:26.098 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003661.76 1000145.16 1009819.04 00:10:26.098 ======================================================== 00:10:26.098 Total : 256.00 0.12 1003099.14 1000145.16 1009819.04 00:10:26.098 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 951235 00:10:26.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (951235) - No such process 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 951235 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:26.667 rmmod nvme_tcp 00:10:26.667 rmmod nvme_fabrics 00:10:26.667 rmmod nvme_keyring 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 950515 ']' 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 950515 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 950515 ']' 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 950515 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 950515 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 950515' 00:10:26.667 killing process with pid 950515 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 950515 00:10:26.667 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 950515 00:10:26.927 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:26.927 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:26.927 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:26.927 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:10:26.927 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:10:26.927 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:26.927 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:10:26.927 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:26.927 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:26.927 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.927 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.927 17:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.833 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:28.833 00:10:28.833 real 0m16.309s 00:10:28.833 user 0m29.450s 00:10:28.833 sys 0m5.493s 00:10:28.833 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:28.833 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:28.833 ************************************ 00:10:28.833 END TEST nvmf_delete_subsystem 00:10:28.833 ************************************ 00:10:28.833 17:27:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:28.833 17:27:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:28.833 17:27:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:28.833 17:27:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:29.092 ************************************ 00:10:29.092 START TEST nvmf_host_management 00:10:29.092 ************************************ 00:10:29.092 17:27:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:29.092 * Looking for test storage... 
00:10:29.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:10:29.092 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:29.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.093 --rc genhtml_branch_coverage=1 00:10:29.093 --rc genhtml_function_coverage=1 00:10:29.093 --rc genhtml_legend=1 00:10:29.093 --rc geninfo_all_blocks=1 00:10:29.093 --rc geninfo_unexecuted_blocks=1 00:10:29.093 00:10:29.093 ' 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:29.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.093 --rc genhtml_branch_coverage=1 00:10:29.093 --rc genhtml_function_coverage=1 00:10:29.093 --rc genhtml_legend=1 00:10:29.093 --rc geninfo_all_blocks=1 00:10:29.093 --rc geninfo_unexecuted_blocks=1 00:10:29.093 00:10:29.093 ' 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:29.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.093 --rc genhtml_branch_coverage=1 00:10:29.093 --rc genhtml_function_coverage=1 00:10:29.093 --rc genhtml_legend=1 00:10:29.093 --rc geninfo_all_blocks=1 00:10:29.093 --rc geninfo_unexecuted_blocks=1 00:10:29.093 00:10:29.093 ' 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:29.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.093 --rc genhtml_branch_coverage=1 00:10:29.093 --rc genhtml_function_coverage=1 00:10:29.093 --rc genhtml_legend=1 00:10:29.093 --rc geninfo_all_blocks=1 00:10:29.093 --rc geninfo_unexecuted_blocks=1 00:10:29.093 00:10:29.093 ' 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:10:29.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:10:29.093 17:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:35.665 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:35.665 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:10:35.665 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:35.665 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:35.665 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:35.665 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:35.665 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:35.665 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:10:35.665 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:35.665 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:10:35.665 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:10:35.665 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:10:35.665 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:10:35.665 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:10:35.665 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:10:35.665 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:35.665 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:35.665 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:35.665 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:35.666 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:35.666 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:35.666 Found net devices under 0000:86:00.0: cvl_0_0 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.666 17:27:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:35.666 Found net devices under 0000:86:00.1: cvl_0_1 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:35.666 17:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:35.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:35.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:10:35.666 00:10:35.666 --- 10.0.0.2 ping statistics --- 00:10:35.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.666 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:35.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:35.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:10:35.666 00:10:35.666 --- 10.0.0.1 ping statistics --- 00:10:35.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.666 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=955462 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 955462 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:35.666 17:27:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 955462 ']' 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.666 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:35.667 [2024-10-14 17:27:34.301620] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:10:35.667 [2024-10-14 17:27:34.301670] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.667 [2024-10-14 17:27:34.373804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.667 [2024-10-14 17:27:34.415015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.667 [2024-10-14 17:27:34.415054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:35.667 [2024-10-14 17:27:34.415061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.667 [2024-10-14 17:27:34.415068] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.667 [2024-10-14 17:27:34.415073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
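The records above show the harness bringing up nvmf_tgt inside the cvl_0_0_ns_spdk namespace with -m 0x1E (reactors on cores 1-4) and then blocking in waitforlisten until the target's RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming a built SPDK tree and the default /var/tmp/spdk.sock RPC path (the trace prints the wait message but not the socket flag, so the path here is an assumption):

# Start the target inside the test namespace; the flags match the trace above.
sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# framework_wait_init returns once the reactors are running and subsystem init is done.
sudo ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init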
00:10:35.667 [2024-10-14 17:27:34.416678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.667 [2024-10-14 17:27:34.416793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.667 [2024-10-14 17:27:34.416889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.667 [2024-10-14 17:27:34.416890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:35.667 [2024-10-14 17:27:34.561508] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:35.667 Malloc0 00:10:35.667 [2024-10-14 17:27:34.644618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=955510 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 955510 /var/tmp/bdevperf.sock 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 955510 ']' 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:35.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:35.667 { 00:10:35.667 "params": { 00:10:35.667 "name": "Nvme$subsystem", 00:10:35.667 "trtype": "$TEST_TRANSPORT", 00:10:35.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:35.667 "adrfam": "ipv4", 00:10:35.667 "trsvcid": "$NVMF_PORT", 00:10:35.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:35.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:35.667 "hdgst": ${hdgst:-false}, 00:10:35.667 "ddgst": ${ddgst:-false} 00:10:35.667 }, 00:10:35.667 "method": "bdev_nvme_attach_controller" 00:10:35.667 } 00:10:35.667 EOF 00:10:35.667 )") 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:10:35.667 17:27:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:35.667 "params": { 00:10:35.667 "name": "Nvme0", 00:10:35.667 "trtype": "tcp", 00:10:35.667 "traddr": "10.0.0.2", 00:10:35.667 "adrfam": "ipv4", 00:10:35.667 "trsvcid": "4420", 00:10:35.667 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:35.667 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:35.667 "hdgst": false, 00:10:35.667 "ddgst": false 00:10:35.667 }, 00:10:35.667 "method": "bdev_nvme_attach_controller" 00:10:35.667 }' 00:10:35.667 [2024-10-14 17:27:34.739707] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
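Here gen_nvmf_target_json expands the template into the params block that bdevperf reads from /dev/fd/63. The same run can be reproduced standalone; the outer "subsystems"/"bdev"/"config" wrapper below is the usual SPDK JSON-config shape and is an assumption here, while the params values and the bdevperf flags are exactly what the trace resolved:

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Queue depth 64, 64 KiB I/O, verify workload, 10 s run, matching the trace above.
./build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10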
00:10:35.667 [2024-10-14 17:27:34.739751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955510 ] 00:10:35.926 [2024-10-14 17:27:34.808341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.926 [2024-10-14 17:27:34.849321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.926 Running I/O for 10 seconds... 00:10:35.926 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:35.926 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:10:35.926 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:35.926 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.926 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:35.926 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.926 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:35.926 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:35.926 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:35.926 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:35.926 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:35.926 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:35.926 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:35.926 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:36.187 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:36.187 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:36.187 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.187 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:36.187 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.187 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=100 00:10:36.187 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 100 -ge 100 ']' 00:10:36.187 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:36.187 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:36.187 17:27:35 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:36.187 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:36.187 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.187 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:36.187 [2024-10-14 17:27:35.118059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99d2c0 is same with the state(6) to be set 00:10:36.187 [2024-10-14 17:27:35.118095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99d2c0 is same with the state(6) to be set 00:10:36.187 [2024-10-14 17:27:35.119323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119471] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.187 [2024-10-14 17:27:35.119888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.187 [2024-10-14 17:27:35.119894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.119902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.119908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.119917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.119923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.119932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.119938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.119946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.119952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.119960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.119966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.119974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.119981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.119989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.119995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:36.188 [2024-10-14 17:27:35.120304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.120366] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x202d850 was disconnected and freed. reset controller. 
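The wall of ABORTED - SQ DELETION completions above is the intended effect of host_management.sh@84: nvmf_subsystem_remove_host revoked the initiator's access, the target tore down the queue pair (qpair 0x202d850), every in-flight WRITE completed aborted, and bdev_nvme scheduled a controller reset. A sketch of the same RPC pair against a live target, assuming the default rpc.py socket:

# Revoke access: the host's qpairs are deleted and outstanding I/O completes aborted.
./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Until access is restored, reconnect attempts fail with CONNECT Invalid Host (01/84), as below.
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0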
00:10:36.188 [2024-10-14 17:27:35.121252] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:10:36.188 task offset: 24576 on job bdev=Nvme0n1 fails 00:10:36.188 00:10:36.188 Latency(us) 00:10:36.188 [2024-10-14T15:27:35.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.188 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:36.188 Job: Nvme0n1 ended in about 0.11 seconds with error 00:10:36.188 Verification LBA range: start 0x0 length 0x400 00:10:36.188 Nvme0n1 : 0.11 1754.63 109.66 584.88 0.00 25235.47 1614.99 26588.89 00:10:36.188 [2024-10-14T15:27:35.326Z] =================================================================================================================== 00:10:36.188 [2024-10-14T15:27:35.326Z] Total : 1754.63 109.66 584.88 0.00 25235.47 1614.99 26588.89 00:10:36.188 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.188 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:36.188 [2024-10-14 17:27:35.123613] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:36.188 [2024-10-14 17:27:35.123636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e145c0 (9): Bad file descriptor 00:10:36.188 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.188 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:36.188 [2024-10-14 17:27:35.126090] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:10:36.188 [2024-10-14 17:27:35.126185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:10:36.188 [2024-10-14 17:27:35.126208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.188 [2024-10-14 17:27:35.126223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:10:36.188 [2024-10-14 17:27:35.126232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:10:36.188 [2024-10-14 17:27:35.126239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:10:36.188 [2024-10-14 17:27:35.126245] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e145c0 00:10:36.188 [2024-10-14 17:27:35.126263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e145c0 (9): Bad file descriptor 00:10:36.188 [2024-10-14 17:27:35.126274] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:10:36.188 [2024-10-14 17:27:35.126280] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:10:36.188 [2024-10-14 17:27:35.126288] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:10:36.188 [2024-10-14 17:27:35.126300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:10:36.188 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.188 17:27:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:37.126 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 955510 00:10:37.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (955510) - No such process 00:10:37.126 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:37.126 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:37.126 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:37.126 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:37.126 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:10:37.126 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:10:37.126 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:37.126 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:37.126 { 00:10:37.126 "params": { 00:10:37.126 "name": "Nvme$subsystem", 00:10:37.126 "trtype": "$TEST_TRANSPORT", 00:10:37.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:37.126 "adrfam": "ipv4", 00:10:37.126 "trsvcid": "$NVMF_PORT", 00:10:37.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:37.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:37.126 "hdgst": ${hdgst:-false}, 00:10:37.126 "ddgst": ${ddgst:-false} 00:10:37.126 }, 00:10:37.126 "method": "bdev_nvme_attach_controller" 00:10:37.126 } 00:10:37.126 EOF 00:10:37.126 )") 00:10:37.126 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:10:37.126 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:10:37.126 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:10:37.126 17:27:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:37.126 "params": { 00:10:37.126 "name": "Nvme0", 00:10:37.126 "trtype": "tcp", 00:10:37.126 "traddr": "10.0.0.2", 00:10:37.126 "adrfam": "ipv4", 00:10:37.126 "trsvcid": "4420", 00:10:37.126 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:37.126 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:37.126 "hdgst": false, 00:10:37.126 "ddgst": false 00:10:37.126 }, 00:10:37.126 "method": "bdev_nvme_attach_controller" 00:10:37.126 }' 00:10:37.126 [2024-10-14 17:27:36.188447] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
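Before tearing the first job down, the harness's waitforio loop polled I/O counters over the bdevperf app's own RPC socket (the bdev_get_iostat / jq pipeline traced earlier). A minimal sketch of that polling pattern, assuming the same socket path and bdev name; the ${read_ops:-0} default also sidesteps the "[: : integer expression expected" noise seen earlier in this log when a test variable is empty:

# Wait up to 10 s for at least 100 completed reads, mirroring the '-ge 100' check in the trace.
for i in {1..10}; do
  read_ops=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
    | jq -r '.bdevs[0].num_read_ops')
  [ "${read_ops:-0}" -ge 100 ] && break
  sleep 1
done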
00:10:37.126 [2024-10-14 17:27:36.188491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955755 ]
00:10:37.126 [2024-10-14 17:27:36.258281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:37.385 [2024-10-14 17:27:36.296927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:37.385 Running I/O for 1 seconds...
00:10:38.765 2048.00 IOPS, 128.00 MiB/s
00:10:38.765 Latency(us)
00:10:38.765 [2024-10-14T15:27:37.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:38.765 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:10:38.765 Verification LBA range: start 0x0 length 0x400
00:10:38.765 Nvme0n1 : 1.02 2074.52 129.66 0.00 0.00 30369.39 6054.28 26464.06
00:10:38.765 [2024-10-14T15:27:37.903Z] ===================================================================================================================
00:10:38.765 [2024-10-14T15:27:37.903Z] Total : 2074.52 129.66 0.00 0.00 30369.39 6054.28 26464.06
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:38.765 rmmod nvme_tcp
00:10:38.765 rmmod nvme_fabrics
00:10:38.765 rmmod nvme_keyring
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 955462 ']'
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 955462
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 955462 ']'
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 955462
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 955462
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 955462'
00:10:38.765 killing process with pid 955462
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 955462
00:10:38.765 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 955462
00:10:39.035 [2024-10-14 17:27:37.969127] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:10:39.035 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:10:39.035 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:10:39.035 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:10:39.035 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:10:39.035 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save
00:10:39.035 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore
00:10:39.035 17:27:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:10:39.035 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:39.035 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:39.035 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:39.035 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:39.035 17:27:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:40.943 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:40.943 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:10:40.943 
00:10:40.943 real 0m12.083s
00:10:40.943 user 0m17.800s
00:10:40.943 sys 0m5.566s
00:10:40.943 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:40.943 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:10:40.943 ************************************
00:10:40.943 END TEST nvmf_host_management
00:10:40.943 ************************************
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
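
Before the log moves into nvmf_lvol, the `--json /dev/fd/62` pattern above is worth a note: gen_nvmf_target_json renders the bdev_nvme_attach_controller JSON printed in the trace, and the test hands it to bdevperf through a file descriptor instead of writing a config file to disk. A hedged sketch of the equivalent invocation (the fd number is simply whatever bash assigns to the process substitution):

    # from the workspace root; gen_nvmf_target_json is defined in test/nvmf/common.sh
    ./spdk/build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1

With host0 back on the allow list, this second run completes its 1-second verify pass at ~2074 IOPS, where the first run above failed at task offset 24576.
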
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:41.203 ************************************
00:10:41.203 START TEST nvmf_lvol
00:10:41.203 ************************************
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:10:41.203 * Looking for test storage...
00:10:41.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:10:41.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:41.203 --rc genhtml_branch_coverage=1
00:10:41.203 --rc genhtml_function_coverage=1
00:10:41.203 --rc genhtml_legend=1
00:10:41.203 --rc geninfo_all_blocks=1
00:10:41.203 --rc geninfo_unexecuted_blocks=1
00:10:41.203 
00:10:41.203 '
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:10:41.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:41.203 --rc genhtml_branch_coverage=1
00:10:41.203 --rc genhtml_function_coverage=1
00:10:41.203 --rc genhtml_legend=1
00:10:41.203 --rc geninfo_all_blocks=1
00:10:41.203 --rc geninfo_unexecuted_blocks=1
00:10:41.203 
00:10:41.203 '
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:10:41.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:41.203 --rc genhtml_branch_coverage=1
00:10:41.203 --rc genhtml_function_coverage=1
00:10:41.203 --rc genhtml_legend=1
00:10:41.203 --rc geninfo_all_blocks=1
00:10:41.203 --rc geninfo_unexecuted_blocks=1
00:10:41.203 
00:10:41.203 '
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:10:41.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:41.203 --rc genhtml_branch_coverage=1
00:10:41.203 --rc genhtml_function_coverage=1
00:10:41.203 --rc genhtml_legend=1
00:10:41.203 --rc geninfo_all_blocks=1
00:10:41.203 --rc geninfo_unexecuted_blocks=1
00:10:41.203 
00:10:41.203 '
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:10:41.203 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
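
The cmp_versions walk above is dense in trace form. Condensed, `lt 1.15 2` splits both versions on `.`, `-`, and `:` and compares field by field numerically; this is a sketch mirroring the scripts/common.sh logic visible in the trace, not a verbatim excerpt:

    IFS=.-: read -ra ver1 <<< "1.15"   # ver1=(1 15), ver1_l=2
    IFS=.-: read -ra ver2 <<< "2"      # ver2=(2),    ver2_l=1
    # walk v over max(ver1_l, ver2_l) fields; the first differing field decides
    # v=0: (( ver1[0] < ver2[0] )) -> 1 < 2, so "1.15 < 2" holds and lt returns 0

The installed lcov (1.15 here) is therefore treated as pre-2.0, which selects the `--rc lcov_*` option spellings exported just below.
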
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:41.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:41.204 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:41.464 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:10:41.464 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:10:41.464 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:10:41.464 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:10:41.464 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:10:41.464 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:10:41.464 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:10:41.464 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:41.464 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs
00:10:41.464 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no
00:10:41.464 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns
00:10:41.464 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:41.464 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:41.464 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:41.464 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:10:41.464 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:10:41.464 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable
00:10:41.464 17:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=()
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=()
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=()
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=()
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=()
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:10:48.039 Found 0000:86:00.0 (0x8086 - 0x159b)
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:10:48.039 Found 0000:86:00.1 (0x8086 - 0x159b)
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:10:48.039 Found net devices under 0000:86:00.0: cvl_0_0
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:10:48.039 Found net devices under 0000:86:00.1: cvl_0_1
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:48.039 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:48.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:48.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms
00:10:48.040 
00:10:48.040 --- 10.0.0.2 ping statistics ---
00:10:48.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:48.040 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:48.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:48.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms
00:10:48.040 
00:10:48.040 --- 10.0.0.1 ping statistics ---
00:10:48.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:48.040 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=959731
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 959731
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 959731 ']'
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:48.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:10:48.040 [2024-10-14 17:27:46.492443] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization...
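
The two pings close out nvmf_tcp_init: the target's port now lives in its own network namespace, and all NVMe/TCP traffic crosses a real E810 link between cvl_0_1 (initiator, default namespace) and cvl_0_0 (target, inside cvl_0_0_ns_spdk). Condensed from the trace above, with the testbed-specific interface names left as-is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # 0.515 ms round trip: the path is live

This is also why nvmf_tgt is launched under `ip netns exec cvl_0_0_ns_spdk` just above: only inside that namespace can it bind 10.0.0.2:4420.
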
00:10:48.040 [2024-10-14 17:27:46.492489] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:48.040 [2024-10-14 17:27:46.564841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:48.040 [2024-10-14 17:27:46.608709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:48.040 [2024-10-14 17:27:46.608743] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:48.040 [2024-10-14 17:27:46.608750] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:48.040 [2024-10-14 17:27:46.608756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:48.040 [2024-10-14 17:27:46.608762] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:48.040 [2024-10-14 17:27:46.610162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:48.040 [2024-10-14 17:27:46.610271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:48.040 [2024-10-14 17:27:46.610273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:10:48.040 [2024-10-14 17:27:46.903262] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:48.040 17:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:48.040 17:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:10:48.040 17:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:48.299 17:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:10:48.299 17:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:10:48.651 17:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:10:48.651 17:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c147779c-bde0-4472-9f17-9050d7ce36b4
00:10:48.651 17:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c147779c-bde0-4472-9f17-9050d7ce36b4 lvol 20
00:10:48.971 17:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0aea99ca-633c-4bfa-9799-3b671c15baf6
00:10:48.971 17:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:10:49.250 17:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0aea99ca-633c-4bfa-9799-3b671c15baf6
00:10:49.250 17:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:10:49.509 [2024-10-14 17:27:48.511398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:49.509 17:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:10:49.769 17:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=960027
00:10:49.769 17:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:10:49.769 17:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:10:50.706 17:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0aea99ca-633c-4bfa-9799-3b671c15baf6 MY_SNAPSHOT
00:10:50.964 17:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=82000c9d-3710-4a60-99b2-3d46d257cb3a
00:10:50.965 17:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0aea99ca-633c-4bfa-9799-3b671c15baf6 30
00:10:51.223 17:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 82000c9d-3710-4a60-99b2-3d46d257cb3a MY_CLONE
00:10:51.482 17:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=43e5a214-7d3d-4299-b074-a7df4730361f
00:10:51.482 17:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 43e5a214-7d3d-4299-b074-a7df4730361f
00:10:52.050 17:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 960027
00:11:00.171 Initializing NVMe Controllers
00:11:00.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:11:00.171 Controller IO queue size 128, less than required.
00:11:00.171 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
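
Before the numbers land, the nvmf_lvol workflow reads more clearly with the trace prefixes stripped; the UUID placeholders below stand for the values printed above (c147779c…, 0aea99ca…, 82000c9d…, 43e5a214…):

    rpc.py bdev_malloc_create 64 512                    # -> Malloc0
    rpc.py bdev_malloc_create 64 512                    # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs           # -> <lvs-uuid>
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20       # 20 MiB lvol, exported via cnode0
    rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT   # taken under randwrite load
    rpc.py bdev_lvol_resize <lvol-uuid> 30              # grow the lvol past its snapshot
    rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
    rpc.py bdev_lvol_inflate <clone-uuid>               # fully allocate the clone, detaching it

The point of the test is that every one of these operations succeeds while spdk_nvme_perf keeps 128 queued randwrites in flight against the same lvol; the queue-size notice above is the perf tool warning that its queue depth meets the controller's IO queue size, so excess requests simply queue in the host driver.
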
00:11:00.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:11:00.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:11:00.171 Initialization complete. Launching workers.
00:11:00.171 ========================================================
00:11:00.171 Latency(us)
00:11:00.171 Device Information : IOPS MiB/s Average min max
00:11:00.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12380.00 48.36 10344.29 1265.23 65680.28
00:11:00.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12245.00 47.83 10455.47 3511.17 60408.51
00:11:00.171 ========================================================
00:11:00.171 Total : 24625.00 96.19 10399.57 1265.23 65680.28
00:11:00.171 
00:11:00.171 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:11:00.430 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0aea99ca-633c-4bfa-9799-3b671c15baf6
00:11:00.430 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c147779c-bde0-4472-9f17-9050d7ce36b4
00:11:00.689 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:11:00.689 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:11:00.689 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:11:00.689 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup
00:11:00.689 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:11:00.689 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:00.689 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:11:00.689 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:00.689 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:00.689 rmmod nvme_tcp
00:11:00.689 rmmod nvme_fabrics
00:11:00.689 rmmod nvme_keyring
00:11:00.948 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:00.948 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:11:00.948 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:11:00.948 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 959731 ']'
00:11:00.948 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 959731
00:11:00.948 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 959731 ']'
00:11:00.948 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 959731
00:11:00.948 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:11:00.948 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:00.948 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 959731
00:11:00.948 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:00.948 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:00.948 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 959731'
00:11:00.948 killing process with pid 959731
00:11:00.948 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 959731
00:11:00.948 17:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 959731
00:11:01.207 17:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:11:01.207 17:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:11:01.207 17:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:11:01.207 17:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:11:01.207 17:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save
00:11:01.207 17:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:11:01.207 17:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore
00:11:01.207 17:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:01.207 17:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:01.207 17:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:01.207 17:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:01.207 17:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:03.113 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:03.113 
00:11:03.113 real 0m22.031s
00:11:03.113 user 1m3.065s
00:11:03.113 sys 0m7.658s
00:11:03.113 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:03.113 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:11:03.113 ************************************
00:11:03.113 END TEST nvmf_lvol
00:11:03.113 ************************************
00:11:03.203 17:28:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:11:03.203 17:28:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:11:03.203 17:28:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:03.203 17:28:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:03.203 ************************************
00:11:03.203 START TEST nvmf_lvs_grow
00:11:03.203 ************************************
00:11:03.203 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:11:03.372 * Looking for test storage...
00:11:03.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-:
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-:
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<'
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2
00:11:03.372 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:11:03.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:03.373 --rc genhtml_branch_coverage=1
00:11:03.373 --rc genhtml_function_coverage=1
00:11:03.373 --rc genhtml_legend=1
00:11:03.373 --rc geninfo_all_blocks=1
00:11:03.373 --rc geninfo_unexecuted_blocks=1
00:11:03.373 
00:11:03.373 '
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:11:03.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:03.373 --rc genhtml_branch_coverage=1
00:11:03.373 --rc genhtml_function_coverage=1
00:11:03.373 --rc genhtml_legend=1
00:11:03.373 --rc geninfo_all_blocks=1
00:11:03.373 --rc geninfo_unexecuted_blocks=1
00:11:03.373 
00:11:03.373 '
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:11:03.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:03.373 --rc genhtml_branch_coverage=1
00:11:03.373 --rc genhtml_function_coverage=1
00:11:03.373 --rc genhtml_legend=1
00:11:03.373 --rc geninfo_all_blocks=1
00:11:03.373 --rc geninfo_unexecuted_blocks=1
00:11:03.373 
00:11:03.373 '
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:11:03.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:03.373 --rc genhtml_branch_coverage=1
00:11:03.373 --rc genhtml_function_coverage=1
00:11:03.373 --rc genhtml_legend=1
00:11:03.373 --rc geninfo_all_blocks=1
00:11:03.373 --rc geninfo_unexecuted_blocks=1
00:11:03.373 
00:11:03.373 '
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:03.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable
00:11:03.373 17:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:09.944 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:09.944 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=()
00:11:09.944 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:09.944 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:09.944 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:09.944 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:09.944 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:09.944 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=()
00:11:09.944 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:09.944 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=()
00:11:09.944 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=()
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=()
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:09.945 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:09.945 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:09.945 17:28:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:09.945 Found net devices under 0000:86:00.0: cvl_0_0 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:09.945 Found net devices under 0000:86:00.1: cvl_0_1 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:09.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:09.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms
00:11:09.945
00:11:09.945 --- 10.0.0.2 ping statistics ---
00:11:09.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:09.945 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:09.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:09.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms
00:11:09.945
00:11:09.945 --- 10.0.0.1 ping statistics ---
00:11:09.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:09.945 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=965505
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 965505
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 965505 ']'
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:09.945 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:09.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:09.946 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:09.946 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:09.946 [2024-10-14 17:28:08.581275] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization...
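The trace above is nvmf/common.sh assembling the physical-NIC TCP topology: one e810 port (cvl_0_0) becomes the target interface and is moved into a private network namespace with address 10.0.0.2/24, while its sibling port (cvl_0_1) stays in the root namespace as the initiator with 10.0.0.1/24; an iptables rule admits TCP port 4420 and a ping in each direction proves the path before the target starts. A minimal standalone sketch of the same plumbing (interface and namespace names are the ones from this run and will differ on other hosts):

  TGT=cvl_0_0; INI=cvl_0_1; NS=cvl_0_0_ns_spdk        # names taken from this run
  ip netns add "$NS"
  ip link set "$TGT" netns "$NS"                      # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INI"                  # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
  ip link set "$INI" up
  ip netns exec "$NS" ip link set "$TGT" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1  # verify both directions

Because the target lives inside the namespace, every target-side command from here on (nvmf_tgt itself, and later pings back toward the initiator) is wrapped in ip netns exec cvl_0_0_ns_spdk, which is exactly what the NVMF_TARGET_NS_CMD array captures.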
00:11:09.946 [2024-10-14 17:28:08.581321] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.946 [2024-10-14 17:28:08.651749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.946 [2024-10-14 17:28:08.691294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.946 [2024-10-14 17:28:08.691330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.946 [2024-10-14 17:28:08.691337] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.946 [2024-10-14 17:28:08.691343] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.946 [2024-10-14 17:28:08.691348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.946 [2024-10-14 17:28:08.691921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.946 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.946 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:11:09.946 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:09.946 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:09.946 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:09.946 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.946 17:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:09.946 [2024-10-14 17:28:08.999187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.946 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:09.946 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:09.946 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:09.946 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:09.946 ************************************ 00:11:09.946 START TEST lvs_grow_clean 00:11:09.946 ************************************ 00:11:09.946 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:11:09.946 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:09.946 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:09.946 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:09.946 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:09.946 17:28:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:09.946 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:09.946 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:09.946 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:09.946 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:10.206 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:10.206 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:10.465 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=cdc3c109-8f66-4786-9a2e-3239744a0edf 00:11:10.465 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdc3c109-8f66-4786-9a2e-3239744a0edf 00:11:10.465 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:10.724 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:10.724 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:10.724 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cdc3c109-8f66-4786-9a2e-3239744a0edf lvol 150 00:11:10.982 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6b6c64ab-8118-4ac2-ac74-251d813c9ded 00:11:10.982 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:10.982 17:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:10.982 [2024-10-14 17:28:10.068315] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:10.982 [2024-10-14 17:28:10.068376] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:10.982 true 00:11:10.982 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
cdc3c109-8f66-4786-9a2e-3239744a0edf 00:11:10.982 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:11.242 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:11.242 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:11.501 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6b6c64ab-8118-4ac2-ac74-251d813c9ded 00:11:11.760 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:11.761 [2024-10-14 17:28:10.818696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.761 17:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:12.021 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=965913 00:11:12.021 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:12.021 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:12.021 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 965913 /var/tmp/bdevperf.sock 00:11:12.021 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 965913 ']' 00:11:12.021 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:12.021 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:12.021 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:12.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:12.021 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:12.021 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:12.021 [2024-10-14 17:28:11.060497] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
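At this point the fixture for the clean-grow case is fully assembled, and the numbers the test later asserts on follow from simple cluster arithmetic: a 200 MiB backing file carved into 4 MiB clusters gives 50 clusters, of which 49 are reported as total_data_clusters (the remainder holds lvstore metadata); truncating the file to 400 MiB and rescanning doubles the AIO bdev from 51200 to 102400 blocks of 4 KiB, and once bdev_lvol_grow_lvstore runs the store reports 99 data clusters (100 minus the same overhead). A condensed sketch of that flow, with $rpc standing in for scripts/rpc.py and $f for the backing file (both placeholders):

  truncate -s 200M "$f"
  $rpc bdev_aio_create "$f" aio_bdev 4096                  # 51200 blocks of 4 KiB
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # -> 49
  truncate -s 400M "$f"
  $rpc bdev_aio_rescan aio_bdev                            # bdev grows to 102400 blocks
  $rpc bdev_lvol_grow_lvstore -u "$lvs"                    # lvstore claims the new space
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # -> 99

The 150 MiB lvol created before the grow is what the NVMe-oF namespace exports (38912 blocks in the bdev dump below, i.e. 38 clusters of 4 MiB), which is why the teardown later sees free_clusters=61 out of 99; the point of the test is that growing the store underneath it must not disturb in-flight I/O.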
00:11:12.021 [2024-10-14 17:28:11.060542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid965913 ] 00:11:12.021 [2024-10-14 17:28:11.127009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.281 [2024-10-14 17:28:11.167125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.281 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:12.281 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:11:12.281 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:12.850 Nvme0n1 00:11:12.850 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:12.850 [ 00:11:12.850 { 00:11:12.850 "name": "Nvme0n1", 00:11:12.850 "aliases": [ 00:11:12.850 "6b6c64ab-8118-4ac2-ac74-251d813c9ded" 00:11:12.850 ], 00:11:12.850 "product_name": "NVMe disk", 00:11:12.850 "block_size": 4096, 00:11:12.850 "num_blocks": 38912, 00:11:12.850 "uuid": "6b6c64ab-8118-4ac2-ac74-251d813c9ded", 00:11:12.850 "numa_id": 1, 00:11:12.850 "assigned_rate_limits": { 00:11:12.850 "rw_ios_per_sec": 0, 00:11:12.850 "rw_mbytes_per_sec": 0, 00:11:12.850 "r_mbytes_per_sec": 0, 00:11:12.850 "w_mbytes_per_sec": 0 00:11:12.850 }, 00:11:12.850 "claimed": false, 00:11:12.850 "zoned": false, 00:11:12.850 "supported_io_types": { 00:11:12.850 "read": true, 00:11:12.850 "write": true, 00:11:12.850 "unmap": true, 00:11:12.850 "flush": true, 00:11:12.850 "reset": true, 00:11:12.850 "nvme_admin": true, 00:11:12.850 "nvme_io": true, 00:11:12.850 "nvme_io_md": false, 00:11:12.850 "write_zeroes": true, 00:11:12.850 "zcopy": false, 00:11:12.850 "get_zone_info": false, 00:11:12.850 "zone_management": false, 00:11:12.850 "zone_append": false, 00:11:12.850 "compare": true, 00:11:12.850 "compare_and_write": true, 00:11:12.850 "abort": true, 00:11:12.850 "seek_hole": false, 00:11:12.850 "seek_data": false, 00:11:12.850 "copy": true, 00:11:12.850 "nvme_iov_md": false 00:11:12.850 }, 00:11:12.850 "memory_domains": [ 00:11:12.850 { 00:11:12.850 "dma_device_id": "system", 00:11:12.850 "dma_device_type": 1 00:11:12.850 } 00:11:12.850 ], 00:11:12.850 "driver_specific": { 00:11:12.850 "nvme": [ 00:11:12.850 { 00:11:12.850 "trid": { 00:11:12.850 "trtype": "TCP", 00:11:12.850 "adrfam": "IPv4", 00:11:12.850 "traddr": "10.0.0.2", 00:11:12.850 "trsvcid": "4420", 00:11:12.850 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:12.850 }, 00:11:12.850 "ctrlr_data": { 00:11:12.850 "cntlid": 1, 00:11:12.850 "vendor_id": "0x8086", 00:11:12.850 "model_number": "SPDK bdev Controller", 00:11:12.850 "serial_number": "SPDK0", 00:11:12.850 "firmware_revision": "25.01", 00:11:12.850 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:12.850 "oacs": { 00:11:12.850 "security": 0, 00:11:12.850 "format": 0, 00:11:12.850 "firmware": 0, 00:11:12.850 "ns_manage": 0 00:11:12.850 }, 00:11:12.850 "multi_ctrlr": true, 00:11:12.850 
"ana_reporting": false 00:11:12.850 }, 00:11:12.850 "vs": { 00:11:12.850 "nvme_version": "1.3" 00:11:12.850 }, 00:11:12.850 "ns_data": { 00:11:12.850 "id": 1, 00:11:12.850 "can_share": true 00:11:12.850 } 00:11:12.850 } 00:11:12.850 ], 00:11:12.850 "mp_policy": "active_passive" 00:11:12.850 } 00:11:12.850 } 00:11:12.850 ] 00:11:12.850 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=966142 00:11:12.850 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:12.850 17:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:13.109 Running I/O for 10 seconds... 00:11:14.046 Latency(us) 00:11:14.046 [2024-10-14T15:28:13.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:14.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:14.046 Nvme0n1 : 1.00 23275.00 90.92 0.00 0.00 0.00 0.00 0.00 00:11:14.046 [2024-10-14T15:28:13.184Z] =================================================================================================================== 00:11:14.046 [2024-10-14T15:28:13.184Z] Total : 23275.00 90.92 0.00 0.00 0.00 0.00 0.00 00:11:14.046 00:11:14.984 17:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cdc3c109-8f66-4786-9a2e-3239744a0edf 00:11:14.984 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:14.984 Nvme0n1 : 2.00 23448.50 91.60 0.00 0.00 0.00 0.00 0.00 00:11:14.984 [2024-10-14T15:28:14.122Z] =================================================================================================================== 00:11:14.984 [2024-10-14T15:28:14.122Z] Total : 23448.50 91.60 0.00 0.00 0.00 0.00 0.00 00:11:14.984 00:11:15.243 true 00:11:15.243 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:15.243 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdc3c109-8f66-4786-9a2e-3239744a0edf 00:11:15.243 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:15.243 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:15.243 17:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 966142 00:11:16.180 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:16.180 Nvme0n1 : 3.00 23521.67 91.88 0.00 0.00 0.00 0.00 0.00 00:11:16.180 [2024-10-14T15:28:15.318Z] =================================================================================================================== 00:11:16.180 [2024-10-14T15:28:15.318Z] Total : 23521.67 91.88 0.00 0.00 0.00 0.00 0.00 00:11:16.180 00:11:17.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:17.116 Nvme0n1 : 4.00 23589.25 92.15 0.00 0.00 0.00 0.00 0.00 00:11:17.116 [2024-10-14T15:28:16.254Z] 
=================================================================================================================== 00:11:17.116 [2024-10-14T15:28:16.254Z] Total : 23589.25 92.15 0.00 0.00 0.00 0.00 0.00 00:11:17.116 00:11:18.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.053 Nvme0n1 : 5.00 23666.40 92.45 0.00 0.00 0.00 0.00 0.00 00:11:18.053 [2024-10-14T15:28:17.191Z] =================================================================================================================== 00:11:18.053 [2024-10-14T15:28:17.191Z] Total : 23666.40 92.45 0.00 0.00 0.00 0.00 0.00 00:11:18.054 00:11:18.989 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.989 Nvme0n1 : 6.00 23694.17 92.56 0.00 0.00 0.00 0.00 0.00 00:11:18.989 [2024-10-14T15:28:18.127Z] =================================================================================================================== 00:11:18.989 [2024-10-14T15:28:18.127Z] Total : 23694.17 92.56 0.00 0.00 0.00 0.00 0.00 00:11:18.989 00:11:19.926 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:19.926 Nvme0n1 : 7.00 23707.14 92.61 0.00 0.00 0.00 0.00 0.00 00:11:19.926 [2024-10-14T15:28:19.064Z] =================================================================================================================== 00:11:19.926 [2024-10-14T15:28:19.064Z] Total : 23707.14 92.61 0.00 0.00 0.00 0.00 0.00 00:11:19.926 00:11:21.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:21.305 Nvme0n1 : 8.00 23710.12 92.62 0.00 0.00 0.00 0.00 0.00 00:11:21.305 [2024-10-14T15:28:20.443Z] =================================================================================================================== 00:11:21.305 [2024-10-14T15:28:20.443Z] Total : 23710.12 92.62 0.00 0.00 0.00 0.00 0.00 00:11:21.305 00:11:22.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:22.240 Nvme0n1 : 9.00 23710.33 92.62 0.00 0.00 0.00 0.00 0.00 00:11:22.240 [2024-10-14T15:28:21.378Z] =================================================================================================================== 00:11:22.240 [2024-10-14T15:28:21.378Z] Total : 23710.33 92.62 0.00 0.00 0.00 0.00 0.00 00:11:22.240 00:11:23.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:23.176 Nvme0n1 : 10.00 23736.40 92.72 0.00 0.00 0.00 0.00 0.00 00:11:23.176 [2024-10-14T15:28:22.314Z] =================================================================================================================== 00:11:23.176 [2024-10-14T15:28:22.314Z] Total : 23736.40 92.72 0.00 0.00 0.00 0.00 0.00 00:11:23.176 00:11:23.176 00:11:23.176 Latency(us) 00:11:23.176 [2024-10-14T15:28:22.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:23.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:23.176 Nvme0n1 : 10.00 23739.55 92.73 0.00 0.00 5389.01 3198.78 13107.20 00:11:23.176 [2024-10-14T15:28:22.314Z] =================================================================================================================== 00:11:23.176 [2024-10-14T15:28:22.314Z] Total : 23739.55 92.73 0.00 0.00 5389.01 3198.78 13107.20 00:11:23.176 { 00:11:23.176 "results": [ 00:11:23.176 { 00:11:23.176 "job": "Nvme0n1", 00:11:23.176 "core_mask": "0x2", 00:11:23.176 "workload": "randwrite", 00:11:23.176 "status": "finished", 00:11:23.176 "queue_depth": 128, 00:11:23.176 "io_size": 4096, 00:11:23.176 
"runtime": 10.004066, 00:11:23.176 "iops": 23739.547499986504, 00:11:23.176 "mibps": 92.73260742182228, 00:11:23.176 "io_failed": 0, 00:11:23.176 "io_timeout": 0, 00:11:23.176 "avg_latency_us": 5389.012200479134, 00:11:23.176 "min_latency_us": 3198.7809523809524, 00:11:23.176 "max_latency_us": 13107.2 00:11:23.176 } 00:11:23.176 ], 00:11:23.176 "core_count": 1 00:11:23.176 } 00:11:23.176 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 965913 00:11:23.176 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 965913 ']' 00:11:23.176 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 965913 00:11:23.176 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:11:23.176 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:23.176 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 965913 00:11:23.176 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:23.177 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:23.177 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 965913' 00:11:23.177 killing process with pid 965913 00:11:23.177 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 965913 00:11:23.177 Received shutdown signal, test time was about 10.000000 seconds 00:11:23.177 00:11:23.177 Latency(us) 00:11:23.177 [2024-10-14T15:28:22.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:23.177 [2024-10-14T15:28:22.315Z] =================================================================================================================== 00:11:23.177 [2024-10-14T15:28:22.315Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:23.177 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 965913 00:11:23.177 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:23.436 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:23.695 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdc3c109-8f66-4786-9a2e-3239744a0edf 00:11:23.695 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:23.955 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:23.955 17:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:23.955 17:28:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:23.955 [2024-10-14 17:28:23.037235] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:23.955 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdc3c109-8f66-4786-9a2e-3239744a0edf 00:11:23.955 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:11:23.955 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdc3c109-8f66-4786-9a2e-3239744a0edf 00:11:23.955 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:23.955 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:23.955 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:23.955 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:23.955 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:23.955 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:23.955 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:23.955 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:23.955 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdc3c109-8f66-4786-9a2e-3239744a0edf 00:11:24.214 request: 00:11:24.214 { 00:11:24.214 "uuid": "cdc3c109-8f66-4786-9a2e-3239744a0edf", 00:11:24.214 "method": "bdev_lvol_get_lvstores", 00:11:24.214 "req_id": 1 00:11:24.214 } 00:11:24.214 Got JSON-RPC error response 00:11:24.214 response: 00:11:24.214 { 00:11:24.214 "code": -19, 00:11:24.214 "message": "No such device" 00:11:24.214 } 00:11:24.214 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:11:24.214 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:24.214 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:24.214 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:24.214 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:24.473 aio_bdev 00:11:24.473 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6b6c64ab-8118-4ac2-ac74-251d813c9ded 00:11:24.473 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=6b6c64ab-8118-4ac2-ac74-251d813c9ded 00:11:24.473 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:24.473 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:11:24.473 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:24.473 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:24.473 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:24.732 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6b6c64ab-8118-4ac2-ac74-251d813c9ded -t 2000 00:11:24.732 [ 00:11:24.732 { 00:11:24.732 "name": "6b6c64ab-8118-4ac2-ac74-251d813c9ded", 00:11:24.732 "aliases": [ 00:11:24.732 "lvs/lvol" 00:11:24.732 ], 00:11:24.732 "product_name": "Logical Volume", 00:11:24.732 "block_size": 4096, 00:11:24.732 "num_blocks": 38912, 00:11:24.732 "uuid": "6b6c64ab-8118-4ac2-ac74-251d813c9ded", 00:11:24.732 "assigned_rate_limits": { 00:11:24.732 "rw_ios_per_sec": 0, 00:11:24.732 "rw_mbytes_per_sec": 0, 00:11:24.732 "r_mbytes_per_sec": 0, 00:11:24.732 "w_mbytes_per_sec": 0 00:11:24.732 }, 00:11:24.732 "claimed": false, 00:11:24.732 "zoned": false, 00:11:24.732 "supported_io_types": { 00:11:24.732 "read": true, 00:11:24.732 "write": true, 00:11:24.732 "unmap": true, 00:11:24.732 "flush": false, 00:11:24.732 "reset": true, 00:11:24.732 "nvme_admin": false, 00:11:24.732 "nvme_io": false, 00:11:24.732 "nvme_io_md": false, 00:11:24.732 "write_zeroes": true, 00:11:24.732 "zcopy": false, 00:11:24.732 "get_zone_info": false, 00:11:24.732 "zone_management": false, 00:11:24.732 "zone_append": false, 00:11:24.732 "compare": false, 00:11:24.732 "compare_and_write": false, 00:11:24.732 "abort": false, 00:11:24.732 "seek_hole": true, 00:11:24.732 "seek_data": true, 00:11:24.732 "copy": false, 00:11:24.732 "nvme_iov_md": false 00:11:24.732 }, 00:11:24.732 "driver_specific": { 00:11:24.732 "lvol": { 00:11:24.732 "lvol_store_uuid": "cdc3c109-8f66-4786-9a2e-3239744a0edf", 00:11:24.732 "base_bdev": "aio_bdev", 00:11:24.732 "thin_provision": false, 00:11:24.732 "num_allocated_clusters": 38, 00:11:24.732 "snapshot": false, 00:11:24.732 "clone": false, 00:11:24.732 "esnap_clone": false 00:11:24.732 } 00:11:24.732 } 00:11:24.732 } 00:11:24.732 ] 00:11:24.732 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:11:24.732 17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdc3c109-8f66-4786-9a2e-3239744a0edf 00:11:24.732 
17:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:24.991 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:24.991 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:24.991 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdc3c109-8f66-4786-9a2e-3239744a0edf 00:11:25.249 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:25.249 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6b6c64ab-8118-4ac2-ac74-251d813c9ded 00:11:25.249 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cdc3c109-8f66-4786-9a2e-3239744a0edf 00:11:25.508 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:25.767 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:25.767 00:11:25.767 real 0m15.736s 00:11:25.767 user 0m15.258s 00:11:25.767 sys 0m1.488s 00:11:25.767 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:25.767 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:25.767 ************************************ 00:11:25.767 END TEST lvs_grow_clean 00:11:25.767 ************************************ 00:11:25.767 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:25.767 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:25.767 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:25.767 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:25.767 ************************************ 00:11:25.767 START TEST lvs_grow_dirty 00:11:25.767 ************************************ 00:11:25.767 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:11:25.767 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:25.767 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:25.767 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:25.767 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:25.767 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:25.767 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:25.767 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:25.767 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:25.767 17:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:26.030 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:26.030 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:26.296 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=cf107701-930b-4eff-8494-05e14603ddd8 00:11:26.296 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf107701-930b-4eff-8494-05e14603ddd8 00:11:26.296 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:26.555 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:26.555 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:26.555 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cf107701-930b-4eff-8494-05e14603ddd8 lvol 150 00:11:26.555 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1df79be2-74a0-43a5-bbdc-3eb9985a210b 00:11:26.555 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:26.555 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:26.814 [2024-10-14 17:28:25.828467] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:26.814 [2024-10-14 17:28:25.828520] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:26.814 true 00:11:26.814 17:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf107701-930b-4eff-8494-05e14603ddd8 00:11:26.814 17:28:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:27.073 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:27.073 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:27.332 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1df79be2-74a0-43a5-bbdc-3eb9985a210b 00:11:27.332 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:27.591 [2024-10-14 17:28:26.558682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.591 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:27.851 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=968708 00:11:27.851 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:27.851 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:27.851 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 968708 /var/tmp/bdevperf.sock 00:11:27.851 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 968708 ']' 00:11:27.851 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:27.851 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:27.851 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:27.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:27.851 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:27.851 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:27.851 [2024-10-14 17:28:26.800783] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:11:27.851 [2024-10-14 17:28:26.800830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid968708 ] 00:11:27.851 [2024-10-14 17:28:26.868841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.851 [2024-10-14 17:28:26.911011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.110 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:28.110 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:11:28.110 17:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:28.369 Nvme0n1 00:11:28.369 17:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:28.628 [ 00:11:28.628 { 00:11:28.628 "name": "Nvme0n1", 00:11:28.628 "aliases": [ 00:11:28.628 "1df79be2-74a0-43a5-bbdc-3eb9985a210b" 00:11:28.628 ], 00:11:28.628 "product_name": "NVMe disk", 00:11:28.628 "block_size": 4096, 00:11:28.628 "num_blocks": 38912, 00:11:28.628 "uuid": "1df79be2-74a0-43a5-bbdc-3eb9985a210b", 00:11:28.628 "numa_id": 1, 00:11:28.628 "assigned_rate_limits": { 00:11:28.628 "rw_ios_per_sec": 0, 00:11:28.628 "rw_mbytes_per_sec": 0, 00:11:28.628 "r_mbytes_per_sec": 0, 00:11:28.628 "w_mbytes_per_sec": 0 00:11:28.628 }, 00:11:28.628 "claimed": false, 00:11:28.628 "zoned": false, 00:11:28.628 "supported_io_types": { 00:11:28.628 "read": true, 00:11:28.628 "write": true, 00:11:28.628 "unmap": true, 00:11:28.628 "flush": true, 00:11:28.628 "reset": true, 00:11:28.628 "nvme_admin": true, 00:11:28.628 "nvme_io": true, 00:11:28.628 "nvme_io_md": false, 00:11:28.628 "write_zeroes": true, 00:11:28.628 "zcopy": false, 00:11:28.628 "get_zone_info": false, 00:11:28.628 "zone_management": false, 00:11:28.628 "zone_append": false, 00:11:28.628 "compare": true, 00:11:28.628 "compare_and_write": true, 00:11:28.628 "abort": true, 00:11:28.628 "seek_hole": false, 00:11:28.628 "seek_data": false, 00:11:28.628 "copy": true, 00:11:28.628 "nvme_iov_md": false 00:11:28.628 }, 00:11:28.628 "memory_domains": [ 00:11:28.628 { 00:11:28.628 "dma_device_id": "system", 00:11:28.628 "dma_device_type": 1 00:11:28.628 } 00:11:28.628 ], 00:11:28.628 "driver_specific": { 00:11:28.628 "nvme": [ 00:11:28.628 { 00:11:28.628 "trid": { 00:11:28.628 "trtype": "TCP", 00:11:28.628 "adrfam": "IPv4", 00:11:28.628 "traddr": "10.0.0.2", 00:11:28.628 "trsvcid": "4420", 00:11:28.628 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:28.628 }, 00:11:28.628 "ctrlr_data": { 00:11:28.628 "cntlid": 1, 00:11:28.628 "vendor_id": "0x8086", 00:11:28.628 "model_number": "SPDK bdev Controller", 00:11:28.628 "serial_number": "SPDK0", 00:11:28.628 "firmware_revision": "25.01", 00:11:28.628 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:28.628 "oacs": { 00:11:28.628 "security": 0, 00:11:28.628 "format": 0, 00:11:28.628 "firmware": 0, 00:11:28.628 "ns_manage": 0 00:11:28.628 }, 00:11:28.628 "multi_ctrlr": true, 00:11:28.628 
"ana_reporting": false 00:11:28.628 }, 00:11:28.628 "vs": { 00:11:28.628 "nvme_version": "1.3" 00:11:28.628 }, 00:11:28.628 "ns_data": { 00:11:28.628 "id": 1, 00:11:28.628 "can_share": true 00:11:28.628 } 00:11:28.628 } 00:11:28.628 ], 00:11:28.628 "mp_policy": "active_passive" 00:11:28.628 } 00:11:28.628 } 00:11:28.628 ] 00:11:28.628 17:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=968743 00:11:28.628 17:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:28.628 17:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:28.628 Running I/O for 10 seconds... 00:11:29.565 Latency(us) 00:11:29.565 [2024-10-14T15:28:28.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:29.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:29.565 Nvme0n1 : 1.00 22437.00 87.64 0.00 0.00 0.00 0.00 0.00 00:11:29.565 [2024-10-14T15:28:28.703Z] =================================================================================================================== 00:11:29.565 [2024-10-14T15:28:28.703Z] Total : 22437.00 87.64 0.00 0.00 0.00 0.00 0.00 00:11:29.565 00:11:30.502 17:28:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cf107701-930b-4eff-8494-05e14603ddd8 00:11:30.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:30.761 Nvme0n1 : 2.00 22542.50 88.06 0.00 0.00 0.00 0.00 0.00 00:11:30.761 [2024-10-14T15:28:29.899Z] =================================================================================================================== 00:11:30.761 [2024-10-14T15:28:29.899Z] Total : 22542.50 88.06 0.00 0.00 0.00 0.00 0.00 00:11:30.761 00:11:30.761 true 00:11:30.761 17:28:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf107701-930b-4eff-8494-05e14603ddd8 00:11:30.761 17:28:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:31.021 17:28:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:31.021 17:28:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:31.021 17:28:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 968743 00:11:31.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:31.589 Nvme0n1 : 3.00 22436.33 87.64 0.00 0.00 0.00 0.00 0.00 00:11:31.589 [2024-10-14T15:28:30.727Z] =================================================================================================================== 00:11:31.589 [2024-10-14T15:28:30.727Z] Total : 22436.33 87.64 0.00 0.00 0.00 0.00 0.00 00:11:31.589 00:11:32.969 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:32.969 Nvme0n1 : 4.00 22517.25 87.96 0.00 0.00 0.00 0.00 0.00 00:11:32.969 [2024-10-14T15:28:32.107Z] 
=================================================================================================================== 00:11:32.969 [2024-10-14T15:28:32.107Z] Total : 22517.25 87.96 0.00 0.00 0.00 0.00 0.00 00:11:32.969 00:11:33.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:33.536 Nvme0n1 : 5.00 22572.20 88.17 0.00 0.00 0.00 0.00 0.00 00:11:33.536 [2024-10-14T15:28:32.674Z] =================================================================================================================== 00:11:33.536 [2024-10-14T15:28:32.674Z] Total : 22572.20 88.17 0.00 0.00 0.00 0.00 0.00 00:11:33.536 00:11:34.915 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:34.915 Nvme0n1 : 6.00 22628.83 88.39 0.00 0.00 0.00 0.00 0.00 00:11:34.915 [2024-10-14T15:28:34.053Z] =================================================================================================================== 00:11:34.915 [2024-10-14T15:28:34.053Z] Total : 22628.83 88.39 0.00 0.00 0.00 0.00 0.00 00:11:34.915 00:11:35.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:35.854 Nvme0n1 : 7.00 22677.29 88.58 0.00 0.00 0.00 0.00 0.00 00:11:35.854 [2024-10-14T15:28:34.992Z] =================================================================================================================== 00:11:35.854 [2024-10-14T15:28:34.992Z] Total : 22677.29 88.58 0.00 0.00 0.00 0.00 0.00 00:11:35.854 00:11:36.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:36.792 Nvme0n1 : 8.00 22712.62 88.72 0.00 0.00 0.00 0.00 0.00 00:11:36.792 [2024-10-14T15:28:35.930Z] =================================================================================================================== 00:11:36.792 [2024-10-14T15:28:35.930Z] Total : 22712.62 88.72 0.00 0.00 0.00 0.00 0.00 00:11:36.792 00:11:37.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.730 Nvme0n1 : 9.00 22742.78 88.84 0.00 0.00 0.00 0.00 0.00 00:11:37.730 [2024-10-14T15:28:36.868Z] =================================================================================================================== 00:11:37.730 [2024-10-14T15:28:36.868Z] Total : 22742.78 88.84 0.00 0.00 0.00 0.00 0.00 00:11:37.730 00:11:38.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:38.669 Nvme0n1 : 10.00 22767.70 88.94 0.00 0.00 0.00 0.00 0.00 00:11:38.669 [2024-10-14T15:28:37.807Z] =================================================================================================================== 00:11:38.669 [2024-10-14T15:28:37.807Z] Total : 22767.70 88.94 0.00 0.00 0.00 0.00 0.00 00:11:38.669 00:11:38.669 00:11:38.669 Latency(us) 00:11:38.669 [2024-10-14T15:28:37.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:38.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:38.669 Nvme0n1 : 10.01 22767.92 88.94 0.00 0.00 5618.22 4244.24 9924.02 00:11:38.669 [2024-10-14T15:28:37.807Z] =================================================================================================================== 00:11:38.669 [2024-10-14T15:28:37.807Z] Total : 22767.92 88.94 0.00 0.00 5618.22 4244.24 9924.02 00:11:38.669 { 00:11:38.669 "results": [ 00:11:38.669 { 00:11:38.669 "job": "Nvme0n1", 00:11:38.669 "core_mask": "0x2", 00:11:38.669 "workload": "randwrite", 00:11:38.669 "status": "finished", 00:11:38.669 "queue_depth": 128, 00:11:38.669 "io_size": 4096, 00:11:38.669 
"runtime": 10.005526, 00:11:38.669 "iops": 22767.91844826549, 00:11:38.669 "mibps": 88.93718143853707, 00:11:38.669 "io_failed": 0, 00:11:38.669 "io_timeout": 0, 00:11:38.669 "avg_latency_us": 5618.217769391324, 00:11:38.669 "min_latency_us": 4244.23619047619, 00:11:38.669 "max_latency_us": 9924.022857142858 00:11:38.669 } 00:11:38.669 ], 00:11:38.669 "core_count": 1 00:11:38.669 } 00:11:38.669 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 968708 00:11:38.669 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 968708 ']' 00:11:38.669 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 968708 00:11:38.669 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:11:38.669 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:38.669 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 968708 00:11:38.669 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:38.669 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:38.669 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 968708' 00:11:38.669 killing process with pid 968708 00:11:38.669 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 968708 00:11:38.669 Received shutdown signal, test time was about 10.000000 seconds 00:11:38.669 00:11:38.669 Latency(us) 00:11:38.669 [2024-10-14T15:28:37.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:38.669 [2024-10-14T15:28:37.807Z] =================================================================================================================== 00:11:38.669 [2024-10-14T15:28:37.807Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:38.669 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 968708 00:11:38.928 17:28:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:39.187 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:39.187 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf107701-930b-4eff-8494-05e14603ddd8 00:11:39.187 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:39.446 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:39.446 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:39.446 17:28:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 965505 00:11:39.446 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 965505 00:11:39.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 965505 Killed "${NVMF_APP[@]}" "$@" 00:11:39.446 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:39.446 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:39.446 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:39.446 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:39.446 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:39.446 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=970593 00:11:39.446 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 970593 00:11:39.446 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:39.446 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 970593 ']' 00:11:39.446 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.446 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:39.446 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.446 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:39.446 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:39.446 [2024-10-14 17:28:38.580283] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:11:39.446 [2024-10-14 17:28:38.580333] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.706 [2024-10-14 17:28:38.650836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.706 [2024-10-14 17:28:38.691432] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.706 [2024-10-14 17:28:38.691468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:39.706 [2024-10-14 17:28:38.691475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.706 [2024-10-14 17:28:38.691481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:39.706 [2024-10-14 17:28:38.691486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:39.706 [2024-10-14 17:28:38.692039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.706 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:39.706 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:11:39.706 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:39.706 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:39.706 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:39.706 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.706 17:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:39.965 [2024-10-14 17:28:38.992925] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:39.965 [2024-10-14 17:28:38.993011] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:39.965 [2024-10-14 17:28:38.993037] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:39.965 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:39.965 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1df79be2-74a0-43a5-bbdc-3eb9985a210b 00:11:39.965 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=1df79be2-74a0-43a5-bbdc-3eb9985a210b 00:11:39.965 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:39.965 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:11:39.965 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:39.965 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:39.965 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:40.225 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1df79be2-74a0-43a5-bbdc-3eb9985a210b -t 2000 00:11:40.484 [ 00:11:40.484 { 00:11:40.484 "name": "1df79be2-74a0-43a5-bbdc-3eb9985a210b", 00:11:40.484 "aliases": [ 00:11:40.484 "lvs/lvol" 00:11:40.484 ], 00:11:40.484 "product_name": "Logical Volume", 00:11:40.484 "block_size": 4096, 00:11:40.484 "num_blocks": 38912, 00:11:40.484 "uuid": "1df79be2-74a0-43a5-bbdc-3eb9985a210b", 00:11:40.484 "assigned_rate_limits": { 00:11:40.484 "rw_ios_per_sec": 0, 00:11:40.484 "rw_mbytes_per_sec": 0, 
00:11:40.484 "r_mbytes_per_sec": 0, 00:11:40.484 "w_mbytes_per_sec": 0 00:11:40.484 }, 00:11:40.484 "claimed": false, 00:11:40.484 "zoned": false, 00:11:40.484 "supported_io_types": { 00:11:40.484 "read": true, 00:11:40.484 "write": true, 00:11:40.484 "unmap": true, 00:11:40.484 "flush": false, 00:11:40.484 "reset": true, 00:11:40.484 "nvme_admin": false, 00:11:40.484 "nvme_io": false, 00:11:40.484 "nvme_io_md": false, 00:11:40.484 "write_zeroes": true, 00:11:40.484 "zcopy": false, 00:11:40.484 "get_zone_info": false, 00:11:40.484 "zone_management": false, 00:11:40.484 "zone_append": false, 00:11:40.484 "compare": false, 00:11:40.484 "compare_and_write": false, 00:11:40.484 "abort": false, 00:11:40.484 "seek_hole": true, 00:11:40.484 "seek_data": true, 00:11:40.484 "copy": false, 00:11:40.484 "nvme_iov_md": false 00:11:40.484 }, 00:11:40.484 "driver_specific": { 00:11:40.484 "lvol": { 00:11:40.484 "lvol_store_uuid": "cf107701-930b-4eff-8494-05e14603ddd8", 00:11:40.484 "base_bdev": "aio_bdev", 00:11:40.484 "thin_provision": false, 00:11:40.484 "num_allocated_clusters": 38, 00:11:40.484 "snapshot": false, 00:11:40.484 "clone": false, 00:11:40.484 "esnap_clone": false 00:11:40.484 } 00:11:40.484 } 00:11:40.484 } 00:11:40.484 ] 00:11:40.484 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:11:40.484 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf107701-930b-4eff-8494-05e14603ddd8 00:11:40.484 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:40.484 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:40.484 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf107701-930b-4eff-8494-05e14603ddd8 00:11:40.484 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:40.744 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:40.744 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:41.005 [2024-10-14 17:28:39.949706] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:41.005 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf107701-930b-4eff-8494-05e14603ddd8 00:11:41.005 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:11:41.005 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf107701-930b-4eff-8494-05e14603ddd8 00:11:41.005 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:41.005 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.005 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:41.005 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.005 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:41.005 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.005 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:41.005 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:41.005 17:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf107701-930b-4eff-8494-05e14603ddd8 00:11:41.264 request: 00:11:41.264 { 00:11:41.264 "uuid": "cf107701-930b-4eff-8494-05e14603ddd8", 00:11:41.264 "method": "bdev_lvol_get_lvstores", 00:11:41.264 "req_id": 1 00:11:41.264 } 00:11:41.264 Got JSON-RPC error response 00:11:41.264 response: 00:11:41.265 { 00:11:41.265 "code": -19, 00:11:41.265 "message": "No such device" 00:11:41.265 } 00:11:41.265 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:11:41.265 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:41.265 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:41.265 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:41.265 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:41.265 aio_bdev 00:11:41.265 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1df79be2-74a0-43a5-bbdc-3eb9985a210b 00:11:41.265 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=1df79be2-74a0-43a5-bbdc-3eb9985a210b 00:11:41.265 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:41.265 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:11:41.265 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:41.265 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:41.265 17:28:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:41.524 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1df79be2-74a0-43a5-bbdc-3eb9985a210b -t 2000 00:11:41.782 [ 00:11:41.782 { 00:11:41.782 "name": "1df79be2-74a0-43a5-bbdc-3eb9985a210b", 00:11:41.782 "aliases": [ 00:11:41.782 "lvs/lvol" 00:11:41.782 ], 00:11:41.782 "product_name": "Logical Volume", 00:11:41.782 "block_size": 4096, 00:11:41.782 "num_blocks": 38912, 00:11:41.782 "uuid": "1df79be2-74a0-43a5-bbdc-3eb9985a210b", 00:11:41.782 "assigned_rate_limits": { 00:11:41.782 "rw_ios_per_sec": 0, 00:11:41.782 "rw_mbytes_per_sec": 0, 00:11:41.782 "r_mbytes_per_sec": 0, 00:11:41.782 "w_mbytes_per_sec": 0 00:11:41.782 }, 00:11:41.782 "claimed": false, 00:11:41.782 "zoned": false, 00:11:41.782 "supported_io_types": { 00:11:41.782 "read": true, 00:11:41.782 "write": true, 00:11:41.782 "unmap": true, 00:11:41.782 "flush": false, 00:11:41.782 "reset": true, 00:11:41.782 "nvme_admin": false, 00:11:41.782 "nvme_io": false, 00:11:41.782 "nvme_io_md": false, 00:11:41.782 "write_zeroes": true, 00:11:41.782 "zcopy": false, 00:11:41.782 "get_zone_info": false, 00:11:41.782 "zone_management": false, 00:11:41.782 "zone_append": false, 00:11:41.782 "compare": false, 00:11:41.782 "compare_and_write": false, 00:11:41.782 "abort": false, 00:11:41.782 "seek_hole": true, 00:11:41.782 "seek_data": true, 00:11:41.782 "copy": false, 00:11:41.782 "nvme_iov_md": false 00:11:41.782 }, 00:11:41.782 "driver_specific": { 00:11:41.782 "lvol": { 00:11:41.782 "lvol_store_uuid": "cf107701-930b-4eff-8494-05e14603ddd8", 00:11:41.782 "base_bdev": "aio_bdev", 00:11:41.782 "thin_provision": false, 00:11:41.782 "num_allocated_clusters": 38, 00:11:41.782 "snapshot": false, 00:11:41.782 "clone": false, 00:11:41.782 "esnap_clone": false 00:11:41.782 } 00:11:41.782 } 00:11:41.782 } 00:11:41.782 ] 00:11:41.782 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:11:41.782 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf107701-930b-4eff-8494-05e14603ddd8 00:11:41.782 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:42.041 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:42.041 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf107701-930b-4eff-8494-05e14603ddd8 00:11:42.041 17:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:42.041 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:42.041 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1df79be2-74a0-43a5-bbdc-3eb9985a210b 00:11:42.299 17:28:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cf107701-930b-4eff-8494-05e14603ddd8 00:11:42.558 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:42.558 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:42.817 00:11:42.817 real 0m16.855s 00:11:42.817 user 0m43.387s 00:11:42.817 sys 0m4.120s 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:42.817 ************************************ 00:11:42.817 END TEST lvs_grow_dirty 00:11:42.817 ************************************ 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:42.817 nvmf_trace.0 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.817 rmmod nvme_tcp 00:11:42.817 rmmod nvme_fabrics 00:11:42.817 rmmod nvme_keyring 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:42.817 
17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 970593 ']' 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 970593 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 970593 ']' 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 970593 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 970593 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 970593' 00:11:42.817 killing process with pid 970593 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 970593 00:11:42.817 17:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 970593 00:11:43.077 17:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:43.077 17:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:43.077 17:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:43.077 17:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:43.077 17:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:11:43.077 17:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:43.077 17:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:11:43.077 17:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:43.077 17:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:43.077 17:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.077 17:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.077 17:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:45.616 00:11:45.616 real 0m41.910s 00:11:45.616 user 1m4.309s 00:11:45.616 sys 0m10.555s 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:45.616 ************************************ 00:11:45.616 END TEST nvmf_lvs_grow 00:11:45.616 ************************************ 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:45.616 ************************************ 00:11:45.616 START TEST nvmf_bdev_io_wait 00:11:45.616 ************************************ 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:45.616 * Looking for test storage... 00:11:45.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.616 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:45.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.617 --rc genhtml_branch_coverage=1 00:11:45.617 --rc genhtml_function_coverage=1 00:11:45.617 --rc genhtml_legend=1 00:11:45.617 --rc geninfo_all_blocks=1 00:11:45.617 --rc geninfo_unexecuted_blocks=1 00:11:45.617 00:11:45.617 ' 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:45.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.617 --rc genhtml_branch_coverage=1 00:11:45.617 --rc genhtml_function_coverage=1 00:11:45.617 --rc genhtml_legend=1 00:11:45.617 --rc geninfo_all_blocks=1 00:11:45.617 --rc geninfo_unexecuted_blocks=1 00:11:45.617 00:11:45.617 ' 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:45.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.617 --rc genhtml_branch_coverage=1 00:11:45.617 --rc genhtml_function_coverage=1 00:11:45.617 --rc genhtml_legend=1 00:11:45.617 --rc geninfo_all_blocks=1 00:11:45.617 --rc geninfo_unexecuted_blocks=1 00:11:45.617 00:11:45.617 ' 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:45.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.617 --rc genhtml_branch_coverage=1 00:11:45.617 --rc genhtml_function_coverage=1 00:11:45.617 --rc genhtml_legend=1 00:11:45.617 --rc geninfo_all_blocks=1 00:11:45.617 --rc geninfo_unexecuted_blocks=1 00:11:45.617 00:11:45.617 ' 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.617 17:28:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:45.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:11:45.617 17:28:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:52.197 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:52.197 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.197 17:28:50 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:52.197 Found net devices under 0000:86:00.0: cvl_0_0 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.197 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:52.198 Found net devices under 0000:86:00.1: cvl_0_1 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:52.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:11:52.198 00:11:52.198 --- 10.0.0.2 ping statistics --- 00:11:52.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.198 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:52.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:11:52.198 00:11:52.198 --- 10.0.0.1 ping statistics --- 00:11:52.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.198 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=974787 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 974787 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 974787 ']' 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.198 [2024-10-14 17:28:50.522395] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
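[Editor's note] The stretch of nvmftestinit above, from prepare_net_devs through the two successful pings, is the part worth extracting from the trace: the harness takes the two ports of the E810 NIC found at 0000:86:00.0/.1 (they appear here as cvl_0_0 and cvl_0_1), moves one port into a private network namespace, and addresses each side on 10.0.0.0/24, so the NVMe/TCP target and initiator run on one host but still cross a real link. A minimal sketch of the same plumbing, with the commands lifted from the trace (the cvl_* names and addresses are specific to this run):

    # Target port moves into its own namespace; the initiator port stays in the root one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic (port 4420) through any host firewall rules.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Verify reachability in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is also why nvmf_tgt is started under ip netns exec cvl_0_0_ns_spdk just above: the target will listen on 10.0.0.2:4420 inside the namespace, and the initiators connect to it from the root namespace.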
00:11:52.198 [2024-10-14 17:28:50.522441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.198 [2024-10-14 17:28:50.598875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.198 [2024-10-14 17:28:50.642300] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.198 [2024-10-14 17:28:50.642336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.198 [2024-10-14 17:28:50.642343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.198 [2024-10-14 17:28:50.642349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.198 [2024-10-14 17:28:50.642354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.198 [2024-10-14 17:28:50.643816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.198 [2024-10-14 17:28:50.643879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.198 [2024-10-14 17:28:50.643964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.198 [2024-10-14 17:28:50.643965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:11:52.198 [2024-10-14 17:28:50.779609] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.198 Malloc0 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.198 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.199 [2024-10-14 17:28:50.834655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=974900 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=974902 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:52.199 { 00:11:52.199 "params": { 
00:11:52.199 "name": "Nvme$subsystem", 00:11:52.199 "trtype": "$TEST_TRANSPORT", 00:11:52.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:52.199 "adrfam": "ipv4", 00:11:52.199 "trsvcid": "$NVMF_PORT", 00:11:52.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:52.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:52.199 "hdgst": ${hdgst:-false}, 00:11:52.199 "ddgst": ${ddgst:-false} 00:11:52.199 }, 00:11:52.199 "method": "bdev_nvme_attach_controller" 00:11:52.199 } 00:11:52.199 EOF 00:11:52.199 )") 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=974904 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:52.199 { 00:11:52.199 "params": { 00:11:52.199 "name": "Nvme$subsystem", 00:11:52.199 "trtype": "$TEST_TRANSPORT", 00:11:52.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:52.199 "adrfam": "ipv4", 00:11:52.199 "trsvcid": "$NVMF_PORT", 00:11:52.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:52.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:52.199 "hdgst": ${hdgst:-false}, 00:11:52.199 "ddgst": ${ddgst:-false} 00:11:52.199 }, 00:11:52.199 "method": "bdev_nvme_attach_controller" 00:11:52.199 } 00:11:52.199 EOF 00:11:52.199 )") 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=974907 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:52.199 { 00:11:52.199 "params": { 00:11:52.199 "name": "Nvme$subsystem", 00:11:52.199 "trtype": "$TEST_TRANSPORT", 00:11:52.199 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:11:52.199 "adrfam": "ipv4", 00:11:52.199 "trsvcid": "$NVMF_PORT", 00:11:52.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:52.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:52.199 "hdgst": ${hdgst:-false}, 00:11:52.199 "ddgst": ${ddgst:-false} 00:11:52.199 }, 00:11:52.199 "method": "bdev_nvme_attach_controller" 00:11:52.199 } 00:11:52.199 EOF 00:11:52.199 )") 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:52.199 { 00:11:52.199 "params": { 00:11:52.199 "name": "Nvme$subsystem", 00:11:52.199 "trtype": "$TEST_TRANSPORT", 00:11:52.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:52.199 "adrfam": "ipv4", 00:11:52.199 "trsvcid": "$NVMF_PORT", 00:11:52.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:52.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:52.199 "hdgst": ${hdgst:-false}, 00:11:52.199 "ddgst": ${ddgst:-false} 00:11:52.199 }, 00:11:52.199 "method": "bdev_nvme_attach_controller" 00:11:52.199 } 00:11:52.199 EOF 00:11:52.199 )") 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 974900 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:52.199 "params": { 00:11:52.199 "name": "Nvme1", 00:11:52.199 "trtype": "tcp", 00:11:52.199 "traddr": "10.0.0.2", 00:11:52.199 "adrfam": "ipv4", 00:11:52.199 "trsvcid": "4420", 00:11:52.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.199 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:52.199 "hdgst": false, 00:11:52.199 "ddgst": false 00:11:52.199 }, 00:11:52.199 "method": "bdev_nvme_attach_controller" 00:11:52.199 }' 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:52.199 "params": { 00:11:52.199 "name": "Nvme1", 00:11:52.199 "trtype": "tcp", 00:11:52.199 "traddr": "10.0.0.2", 00:11:52.199 "adrfam": "ipv4", 00:11:52.199 "trsvcid": "4420", 00:11:52.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.199 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:52.199 "hdgst": false, 00:11:52.199 "ddgst": false 00:11:52.199 }, 00:11:52.199 "method": "bdev_nvme_attach_controller" 00:11:52.199 }' 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:52.199 "params": { 00:11:52.199 "name": "Nvme1", 00:11:52.199 "trtype": "tcp", 00:11:52.199 "traddr": "10.0.0.2", 00:11:52.199 "adrfam": "ipv4", 00:11:52.199 "trsvcid": "4420", 00:11:52.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.199 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:52.199 "hdgst": false, 00:11:52.199 "ddgst": false 00:11:52.199 }, 00:11:52.199 "method": "bdev_nvme_attach_controller" 00:11:52.199 }' 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:11:52.199 17:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:52.199 "params": { 00:11:52.199 "name": "Nvme1", 00:11:52.199 "trtype": "tcp", 00:11:52.199 "traddr": "10.0.0.2", 00:11:52.199 "adrfam": "ipv4", 00:11:52.199 "trsvcid": "4420", 00:11:52.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.199 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:52.199 "hdgst": false, 00:11:52.199 "ddgst": false 00:11:52.199 }, 00:11:52.199 "method": "bdev_nvme_attach_controller" 00:11:52.199 }' 00:11:52.199 [2024-10-14 17:28:50.886555] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:11:52.200 [2024-10-14 17:28:50.886633] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:52.200 [2024-10-14 17:28:50.889065] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:11:52.200 [2024-10-14 17:28:50.889110] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:52.200 [2024-10-14 17:28:50.890833] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:11:52.200 [2024-10-14 17:28:50.890875] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:52.200 [2024-10-14 17:28:50.891499] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
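[Editor's note] The block above is where the test assembles its fixture: rpc_cmd (a wrapper around SPDK's scripts/rpc.py) builds the target over the Unix socket, and gen_nvmf_target_json emits the four bdev_nvme_attach_controller configs that each bdevperf instance reads as JSON on fd 63; the four blobs printed above are those configs after jq normalization. Condensed into plain commands, with the long Jenkins paths dropped; the rpc.py spelling is the standard CLI equivalent of rpc_cmd, not a literal quote from this run:

    # Target side (run inside the namespace): a 64 MiB malloc disk with 512 B blocks,
    # exposed as a namespace of subsystem cnode1 on the NVMe/TCP listener.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: one bdevperf per workload (write/read/flush/unmap, seen in the
    # trace with core masks 0x10/0x20/0x40/0x80), queue depth 128, 4 KiB I/O, 1 s run.
    bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256

Note the core-mask layout: the target runs on 0xF (cores 0-3) while the four initiators take cores 4-7, so the five SPDK processes never share a reactor core.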
00:11:52.200 [2024-10-14 17:28:50.891538] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:11:52.200 [2024-10-14 17:28:51.070685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:52.200 [2024-10-14 17:28:51.108020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:52.200 [2024-10-14 17:28:51.129720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:11:52.200 [2024-10-14 17:28:51.150475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:11:52.200 [2024-10-14 17:28:51.182155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:52.200 [2024-10-14 17:28:51.217583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:11:52.200 [2024-10-14 17:28:51.283135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:52.459 [2024-10-14 17:28:51.342580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:11:52.459 Running I/O for 1 seconds...
00:11:52.459 Running I/O for 1 seconds...
00:11:52.459 Running I/O for 1 seconds...
00:11:52.459 Running I/O for 1 seconds...
00:11:53.397 12501.00 IOPS, 48.83 MiB/s
00:11:53.397 Latency(us)
00:11:53.397 [2024-10-14T15:28:52.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:53.397 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:11:53.397 Nvme1n1 : 1.01 12558.18 49.06 0.00 0.00 10160.26 5430.13 16727.28
00:11:53.397 [2024-10-14T15:28:52.535Z] ===================================================================================================================
00:11:53.397 [2024-10-14T15:28:52.535Z] Total : 12558.18 49.06 0.00 0.00 10160.26 5430.13 16727.28
00:11:53.397 254328.00 IOPS, 993.47 MiB/s
00:11:53.397 Latency(us)
00:11:53.397 [2024-10-14T15:28:52.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:53.397 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:11:53.397 Nvme1n1 : 1.00 253948.22 991.99 0.00 0.00 501.25 227.23 1490.16
00:11:53.397 [2024-10-14T15:28:52.535Z] ===================================================================================================================
00:11:53.397 [2024-10-14T15:28:52.535Z] Total : 253948.22 991.99 0.00 0.00 501.25 227.23 1490.16
00:11:53.397 9853.00 IOPS, 38.49 MiB/s
[2024-10-14T15:28:52.535Z] 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 974902
00:11:53.397
00:11:53.398 Latency(us)
00:11:53.398 [2024-10-14T15:28:52.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:53.398 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:11:53.398 Nvme1n1 : 1.01 9912.00 38.72 0.00 0.00 12865.37 6116.69 19348.72
00:11:53.398 [2024-10-14T15:28:52.536Z] ===================================================================================================================
00:11:53.398 [2024-10-14T15:28:52.536Z] Total : 9912.00 38.72 0.00 0.00 12865.37 6116.69 19348.72
00:11:53.657 11283.00 IOPS, 44.07 MiB/s
00:11:53.657 Latency(us)
00:11:53.657 [2024-10-14T15:28:52.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:53.657 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:11:53.657 Nvme1n1 : 1.01 11367.12 44.40 0.00 0.00 11231.91 3198.78 24591.60
00:11:53.657 [2024-10-14T15:28:52.795Z] ===================================================================================================================
00:11:53.657 [2024-10-14T15:28:52.795Z] Total : 11367.12 44.40 0.00 0.00 11231.91 3198.78 24591.60
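[Editor's note] The four result tables above are internally consistent under Little's law (outstanding I/Os = IOPS × average latency): write 12558.18 × 10160.26 µs ≈ 127.6, read 9912.00 × 12865.37 µs ≈ 127.5, unmap 11367.12 × 11231.91 µs ≈ 127.7, and flush 253948.22 × 501.25 µs ≈ 127.3, all recovering the requested -q 128. The flush job's roughly 20x higher IOPS is expected rather than suspicious here: the namespace is backed by the RAM-resident Malloc0 bdev, so a flush has nothing to persist and completes almost immediately (that interpretation is the editor's; the log only reports the numbers).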
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 974904
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 974907
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:53.657 rmmod nvme_tcp
00:11:53.657 rmmod nvme_fabrics
00:11:53.657 rmmod nvme_keyring
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 974787 ']'
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 974787
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 974787 ']'
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 974787
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 974787
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 974787'
00:11:53.657 killing process with pid 974787
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 974787
00:11:53.657 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 974787
00:11:53.917 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:11:53.917 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:11:53.917 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:11:53.917 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr
00:11:53.917 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save
00:11:53.917 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:11:53.917 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore
00:11:53.917 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:53.917 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:53.917 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:53.917 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:53.917 17:28:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:56.457
00:11:56.457 real 0m10.791s
00:11:56.457 user 0m15.866s
00:11:56.457 sys 0m6.284s
00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:11:56.457 ************************************
00:11:56.457 END TEST nvmf_bdev_io_wait
00:11:56.457 ************************************
00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:56.457 ************************************
00:11:56.457 START TEST nvmf_queue_depth
00:11:56.457 ************************************
00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:11:56.457 * Looking for test storage...
00:11:56.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:56.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.457 --rc genhtml_branch_coverage=1 00:11:56.457 --rc genhtml_function_coverage=1 00:11:56.457 --rc genhtml_legend=1 00:11:56.457 --rc geninfo_all_blocks=1 00:11:56.457 --rc geninfo_unexecuted_blocks=1 00:11:56.457 00:11:56.457 ' 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:56.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.457 --rc genhtml_branch_coverage=1 00:11:56.457 --rc genhtml_function_coverage=1 00:11:56.457 --rc genhtml_legend=1 00:11:56.457 --rc geninfo_all_blocks=1 00:11:56.457 --rc geninfo_unexecuted_blocks=1 00:11:56.457 00:11:56.457 ' 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:56.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.457 --rc genhtml_branch_coverage=1 00:11:56.457 --rc genhtml_function_coverage=1 00:11:56.457 --rc genhtml_legend=1 00:11:56.457 --rc geninfo_all_blocks=1 00:11:56.457 --rc geninfo_unexecuted_blocks=1 00:11:56.457 00:11:56.457 ' 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:56.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.457 --rc genhtml_branch_coverage=1 00:11:56.457 --rc genhtml_function_coverage=1 00:11:56.457 --rc genhtml_legend=1 00:11:56.457 --rc geninfo_all_blocks=1 00:11:56.457 --rc geninfo_unexecuted_blocks=1 00:11:56.457 00:11:56.457 ' 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.457 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:11:56.458 17:28:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:03.035 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:03.035 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.035 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:03.035 Found net devices under 0000:86:00.0: cvl_0_0 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:03.036 Found net devices under 0000:86:00.1: cvl_0_1 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:03.036 17:29:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:03.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:03.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:12:03.036 00:12:03.036 --- 10.0.0.2 ping statistics --- 00:12:03.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.036 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:03.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:03.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:12:03.036 00:12:03.036 --- 10.0.0.1 ping statistics --- 00:12:03.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.036 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=978692 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 978692 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 978692 ']' 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:03.036 [2024-10-14 17:29:01.322666] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:12:03.036 [2024-10-14 17:29:01.322706] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.036 [2024-10-14 17:29:01.397154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.036 [2024-10-14 17:29:01.439966] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.036 [2024-10-14 17:29:01.439997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.036 [2024-10-14 17:29:01.440004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.036 [2024-10-14 17:29:01.440010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.036 [2024-10-14 17:29:01.440015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.036 [2024-10-14 17:29:01.440563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:03.036 [2024-10-14 17:29:01.580017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:03.036 Malloc0 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:03.036 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.036 17:29:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:03.037 [2024-10-14 17:29:01.630340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=978834 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 978834 /var/tmp/bdevperf.sock 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 978834 ']' 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:03.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:03.037 [2024-10-14 17:29:01.682657] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:12:03.037 [2024-10-14 17:29:01.682700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid978834 ] 00:12:03.037 [2024-10-14 17:29:01.750994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.037 [2024-10-14 17:29:01.793313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.037 17:29:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:03.037 NVMe0n1 00:12:03.037 17:29:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.037 17:29:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:03.297 Running I/O for 10 seconds... 00:12:05.173 12239.00 IOPS, 47.81 MiB/s [2024-10-14T15:29:05.249Z] 12286.00 IOPS, 47.99 MiB/s [2024-10-14T15:29:06.628Z] 12434.33 IOPS, 48.57 MiB/s [2024-10-14T15:29:07.633Z] 12521.50 IOPS, 48.91 MiB/s [2024-10-14T15:29:08.276Z] 12527.20 IOPS, 48.93 MiB/s [2024-10-14T15:29:09.252Z] 12575.50 IOPS, 49.12 MiB/s [2024-10-14T15:29:10.630Z] 12567.86 IOPS, 49.09 MiB/s [2024-10-14T15:29:11.569Z] 12594.00 IOPS, 49.20 MiB/s [2024-10-14T15:29:12.506Z] 12605.67 IOPS, 49.24 MiB/s [2024-10-14T15:29:12.506Z] 12595.50 IOPS, 49.20 MiB/s 00:12:13.368 Latency(us) 00:12:13.368 [2024-10-14T15:29:12.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:13.368 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:13.368 Verification LBA range: start 0x0 length 0x4000 00:12:13.368 NVMe0n1 : 10.05 12635.35 49.36 0.00 0.00 80771.81 9674.36 54426.09 00:12:13.368 [2024-10-14T15:29:12.506Z] =================================================================================================================== 00:12:13.368 [2024-10-14T15:29:12.506Z] Total : 12635.35 49.36 0.00 0.00 80771.81 9674.36 54426.09 00:12:13.368 { 00:12:13.368 "results": [ 00:12:13.368 { 00:12:13.368 "job": "NVMe0n1", 00:12:13.368 "core_mask": "0x1", 00:12:13.368 "workload": "verify", 00:12:13.368 "status": "finished", 00:12:13.368 "verify_range": { 00:12:13.368 "start": 0, 00:12:13.368 "length": 16384 00:12:13.368 }, 00:12:13.368 "queue_depth": 1024, 00:12:13.368 "io_size": 4096, 00:12:13.368 "runtime": 10.049505, 00:12:13.368 "iops": 12635.34870622981, 00:12:13.368 "mibps": 49.356830883710195, 00:12:13.368 "io_failed": 0, 00:12:13.368 "io_timeout": 0, 00:12:13.368 "avg_latency_us": 80771.80937044333, 00:12:13.368 "min_latency_us": 9674.361904761905, 00:12:13.368 "max_latency_us": 54426.08761904762 00:12:13.368 } 00:12:13.368 ], 00:12:13.368 "core_count": 1 00:12:13.368 } 00:12:13.368 17:29:12 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 978834 00:12:13.368 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 978834 ']' 00:12:13.368 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 978834 00:12:13.368 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:12:13.368 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:13.368 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 978834 00:12:13.368 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:13.369 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:13.369 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 978834' 00:12:13.369 killing process with pid 978834 00:12:13.369 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 978834 00:12:13.369 Received shutdown signal, test time was about 10.000000 seconds 00:12:13.369 00:12:13.369 Latency(us) 00:12:13.369 [2024-10-14T15:29:12.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:13.369 [2024-10-14T15:29:12.507Z] =================================================================================================================== 00:12:13.369 [2024-10-14T15:29:12.507Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:13.369 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 978834 00:12:13.369 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:13.369 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:13.369 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:13.369 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:12:13.628 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:13.628 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:12:13.628 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:13.628 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:13.628 rmmod nvme_tcp 00:12:13.628 rmmod nvme_fabrics 00:12:13.628 rmmod nvme_keyring 00:12:13.628 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:13.628 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:12:13.628 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:12:13.628 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 978692 ']' 00:12:13.628 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 978692 00:12:13.628 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 978692 ']' 00:12:13.628 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # kill -0 978692 00:12:13.628 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:12:13.628 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:13.628 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 978692 00:12:13.628 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:13.628 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:13.628 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 978692' 00:12:13.628 killing process with pid 978692 00:12:13.628 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 978692 00:12:13.628 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 978692 00:12:13.887 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:13.887 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:13.887 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:13.887 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:12:13.887 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:12:13.887 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:13.887 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:12:13.887 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:13.887 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:13.887 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.887 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.887 17:29:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.791 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:15.791 00:12:15.791 real 0m19.790s 00:12:15.791 user 0m23.235s 00:12:15.791 sys 0m6.006s 00:12:15.791 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.791 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:15.791 ************************************ 00:12:15.791 END TEST nvmf_queue_depth 00:12:15.791 ************************************ 00:12:15.791 17:29:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:15.791 17:29:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:15.791 17:29:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.791 17:29:14 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:12:16.050 ************************************ 00:12:16.050 START TEST nvmf_target_multipath 00:12:16.050 ************************************ 00:12:16.050 17:29:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:16.050 * Looking for test storage... 00:12:16.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:16.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.050 --rc genhtml_branch_coverage=1 00:12:16.050 --rc genhtml_function_coverage=1 00:12:16.050 --rc genhtml_legend=1 00:12:16.050 --rc geninfo_all_blocks=1 00:12:16.050 --rc geninfo_unexecuted_blocks=1 00:12:16.050 00:12:16.050 ' 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:16.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.050 --rc genhtml_branch_coverage=1 00:12:16.050 --rc genhtml_function_coverage=1 00:12:16.050 --rc genhtml_legend=1 00:12:16.050 --rc geninfo_all_blocks=1 00:12:16.050 --rc geninfo_unexecuted_blocks=1 00:12:16.050 00:12:16.050 ' 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:16.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.050 --rc genhtml_branch_coverage=1 00:12:16.050 --rc genhtml_function_coverage=1 00:12:16.050 --rc genhtml_legend=1 00:12:16.050 --rc geninfo_all_blocks=1 00:12:16.050 --rc geninfo_unexecuted_blocks=1 00:12:16.050 00:12:16.050 ' 00:12:16.050 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:16.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.050 --rc genhtml_branch_coverage=1 00:12:16.050 --rc genhtml_function_coverage=1 00:12:16.051 --rc genhtml_legend=1 00:12:16.051 --rc geninfo_all_blocks=1 00:12:16.051 --rc geninfo_unexecuted_blocks=1 00:12:16.051 00:12:16.051 ' 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:16.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:12:16.051 17:29:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:22.622 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.622 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:12:22.622 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:22.622 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:22.622 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:22.622 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:22.622 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:22.623 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:22.623 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:22.623 Found net devices under 0000:86:00.0: cvl_0_0 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.623 17:29:20 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:22.623 Found net devices under 0000:86:00.1: cvl_0_1 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.623 17:29:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.623 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:12:22.623 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.623 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.623 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.623 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:22.623 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:22.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:22.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:12:22.623 00:12:22.623 --- 10.0.0.2 ping statistics --- 00:12:22.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.623 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:12:22.623 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:22.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:12:22.623 00:12:22.623 --- 10.0.0.1 ping statistics --- 00:12:22.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.623 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:12:22.623 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.623 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:12:22.623 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:22.623 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.623 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:22.623 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:22.623 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.623 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:22.623 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:22.624 only one NIC for nvmf test 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
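A note on the bring-up traced above: nvmf_tcp_init (nvmf/common.sh@250-291) builds a self-contained NVMe/TCP link out of the two E810 ports found earlier (cvl_0_0 and cvl_0_1, presumably cabled back to back) by hiding the target-side port in a network namespace, so initiator 10.0.0.1 and target 10.0.0.2 exchange traffic over real hardware on a single host. A minimal stand-alone sketch of the same steps, reusing the interface and namespace names from the trace (run as root):

  # Hide the target-side port in its own namespace; the initiator-side
  # port (cvl_0_1) stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Address both ends of the link.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # Bring the links (and the namespace loopback) up.
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the NVMe/TCP port and verify reachability in both directions,
  # as the pings above do.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The teardown that follows (nvmftestfini) simply unwinds this: unload the nvme-tcp and nvme-fabrics modules, drop the SPDK_NVMF rule via iptables-save | grep -v SPDK_NVMF | iptables-restore, delete the namespace, and flush the leftover address.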
00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:22.624 rmmod nvme_tcp 00:12:22.624 rmmod nvme_fabrics 00:12:22.624 rmmod nvme_keyring 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.624 17:29:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:24.531 00:12:24.531 real 0m8.370s 00:12:24.531 user 0m1.793s 00:12:24.531 sys 0m4.585s 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:24.531 ************************************ 00:12:24.531 END TEST nvmf_target_multipath 00:12:24.531 ************************************ 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:24.531 ************************************ 00:12:24.531 START TEST nvmf_zcopy 00:12:24.531 ************************************ 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:24.531 * Looking for test storage... 
00:12:24.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:12:24.531 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:24.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.532 --rc genhtml_branch_coverage=1 00:12:24.532 --rc genhtml_function_coverage=1 00:12:24.532 --rc genhtml_legend=1 00:12:24.532 --rc geninfo_all_blocks=1 00:12:24.532 --rc geninfo_unexecuted_blocks=1 00:12:24.532 00:12:24.532 ' 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:24.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.532 --rc genhtml_branch_coverage=1 00:12:24.532 --rc genhtml_function_coverage=1 00:12:24.532 --rc genhtml_legend=1 00:12:24.532 --rc geninfo_all_blocks=1 00:12:24.532 --rc geninfo_unexecuted_blocks=1 00:12:24.532 00:12:24.532 ' 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:24.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.532 --rc genhtml_branch_coverage=1 00:12:24.532 --rc genhtml_function_coverage=1 00:12:24.532 --rc genhtml_legend=1 00:12:24.532 --rc geninfo_all_blocks=1 00:12:24.532 --rc geninfo_unexecuted_blocks=1 00:12:24.532 00:12:24.532 ' 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:24.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.532 --rc genhtml_branch_coverage=1 00:12:24.532 --rc genhtml_function_coverage=1 00:12:24.532 --rc genhtml_legend=1 00:12:24.532 --rc geninfo_all_blocks=1 00:12:24.532 --rc geninfo_unexecuted_blocks=1 00:12:24.532 00:12:24.532 ' 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:24.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:12:24.532 17:29:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:31.103 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:31.103 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:31.103 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:31.104 Found net devices under 0000:86:00.0: cvl_0_0 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:31.104 Found net devices under 0000:86:00.1: cvl_0_1 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:31.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:12:31.104 00:12:31.104 --- 10.0.0.2 ping statistics --- 00:12:31.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.104 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:31.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:12:31.104 00:12:31.104 --- 10.0.0.1 ping statistics --- 00:12:31.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.104 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=987715 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 987715 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 987715 ']' 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:31.104 [2024-10-14 17:29:29.669435] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:12:31.104 [2024-10-14 17:29:29.669481] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.104 [2024-10-14 17:29:29.741665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.104 [2024-10-14 17:29:29.782091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.104 [2024-10-14 17:29:29.782127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.104 [2024-10-14 17:29:29.782138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.104 [2024-10-14 17:29:29.782144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.104 [2024-10-14 17:29:29.782149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.104 [2024-10-14 17:29:29.782724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:31.104 [2024-10-14 17:29:29.917925] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:31.104 [2024-10-14 17:29:29.938112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.104 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:31.105 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.105 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:31.105 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.105 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:31.105 malloc0 00:12:31.105 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.105 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:31.105 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.105 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:31.105 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.105 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:31.105 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:31.105 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:12:31.105 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:12:31.105 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:12:31.105 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:12:31.105 { 00:12:31.105 "params": { 00:12:31.105 "name": "Nvme$subsystem", 00:12:31.105 "trtype": "$TEST_TRANSPORT", 00:12:31.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:31.105 "adrfam": "ipv4", 00:12:31.105 "trsvcid": "$NVMF_PORT", 00:12:31.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:31.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:31.105 "hdgst": ${hdgst:-false}, 00:12:31.105 "ddgst": ${ddgst:-false} 00:12:31.105 }, 00:12:31.105 "method": "bdev_nvme_attach_controller" 00:12:31.105 } 00:12:31.105 EOF 00:12:31.105 )") 00:12:31.105 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:12:31.105 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
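Everything from nvmf_create_transport down to nvmf_subsystem_add_ns above is ordinary SPDK JSON-RPC traffic; inside the harness, rpc_cmd just forwards its arguments to the target's RPC socket, which scripts/rpc.py does equally well outside it. A sketch of the traced bring-up, assuming an SPDK checkout and a target already running in the namespace as shown (flags copied from the trace):

  RPC="./scripts/rpc.py"   # adjust to your SPDK checkout

  # TCP transport with zero-copy enabled and in-capsule data size 0.
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

  # Subsystem cnode1: any host may connect (-a), at most 10 namespaces (-m).
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

  # Data and discovery listeners on the target-side address.
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # A 32 MiB RAM-backed bdev with 4096-byte blocks, exported as namespace 1.
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The initiator side needs no nvme-cli at all: gen_nvmf_target_json, expanded below, just emits a bdev_nvme_attach_controller stanza pointing at 10.0.0.2:4420, and bdevperf reads it from /dev/fd/62.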
00:12:31.105 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=,
00:12:31.105 17:29:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:12:31.105 "params": {
00:12:31.105 "name": "Nvme1",
00:12:31.105 "trtype": "tcp",
00:12:31.105 "traddr": "10.0.0.2",
00:12:31.105 "adrfam": "ipv4",
00:12:31.105 "trsvcid": "4420",
00:12:31.105 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:12:31.105 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:12:31.105 "hdgst": false,
00:12:31.105 "ddgst": false
00:12:31.105 },
00:12:31.105 "method": "bdev_nvme_attach_controller"
00:12:31.105 }'
00:12:31.105 [2024-10-14 17:29:30.022572] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization...
00:12:31.105 [2024-10-14 17:29:30.022624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid987882 ]
00:12:31.105 [2024-10-14 17:29:30.090867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:31.105 [2024-10-14 17:29:30.131795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:31.365 Running I/O for 10 seconds...
00:12:33.681 8613.00 IOPS, 67.29 MiB/s
[2024-10-14T15:29:33.756Z] 8681.50 IOPS, 67.82 MiB/s
[2024-10-14T15:29:34.694Z] 8718.33 IOPS, 68.11 MiB/s
[2024-10-14T15:29:35.629Z] 8726.50 IOPS, 68.18 MiB/s
[2024-10-14T15:29:36.566Z] 8745.20 IOPS, 68.32 MiB/s
[2024-10-14T15:29:37.503Z] 8755.33 IOPS, 68.40 MiB/s
[2024-10-14T15:29:38.883Z] 8757.00 IOPS, 68.41 MiB/s
[2024-10-14T15:29:39.820Z] 8762.62 IOPS, 68.46 MiB/s
[2024-10-14T15:29:40.758Z] 8766.89 IOPS, 68.49 MiB/s
[2024-10-14T15:29:40.758Z] 8757.10 IOPS, 68.41 MiB/s
00:12:41.620 Latency(us)
00:12:41.620 [2024-10-14T15:29:40.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:41.620 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:12:41.620 Verification LBA range: start 0x0 length 0x1000
00:12:41.620 Nvme1n1 : 10.01 8756.59 68.41 0.00 0.00 14576.78 1958.28 23343.30
00:12:41.620 [2024-10-14T15:29:40.758Z] ===================================================================================================================
00:12:41.620 [2024-10-14T15:29:40.758Z] Total : 8756.59 68.41 0.00 0.00 14576.78 1958.28 23343.30
00:12:41.620 17:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=989516
00:12:41.620 17:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:12:41.620 17:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:41.620 17:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:12:41.620 17:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:12:41.620 17:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=()
00:12:41.620 17:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config
00:12:41.620 17:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:12:41.620 17:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:12:41.620 {
00:12:41.620 "params": {
00:12:41.620 "name":
"Nvme$subsystem", 00:12:41.620 "trtype": "$TEST_TRANSPORT", 00:12:41.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:41.620 "adrfam": "ipv4", 00:12:41.620 "trsvcid": "$NVMF_PORT", 00:12:41.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:41.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:41.620 "hdgst": ${hdgst:-false}, 00:12:41.620 "ddgst": ${ddgst:-false} 00:12:41.620 }, 00:12:41.620 "method": "bdev_nvme_attach_controller" 00:12:41.620 } 00:12:41.620 EOF 00:12:41.620 )") 00:12:41.620 17:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:12:41.620 [2024-10-14 17:29:40.647365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.620 [2024-10-14 17:29:40.647397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.620 17:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:12:41.620 17:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:12:41.620 17:29:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:12:41.620 "params": { 00:12:41.620 "name": "Nvme1", 00:12:41.620 "trtype": "tcp", 00:12:41.620 "traddr": "10.0.0.2", 00:12:41.620 "adrfam": "ipv4", 00:12:41.620 "trsvcid": "4420", 00:12:41.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:41.620 "hdgst": false, 00:12:41.620 "ddgst": false 00:12:41.620 }, 00:12:41.620 "method": "bdev_nvme_attach_controller" 00:12:41.620 }' 00:12:41.620 [2024-10-14 17:29:40.659359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.620 [2024-10-14 17:29:40.659374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.620 [2024-10-14 17:29:40.671390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.620 [2024-10-14 17:29:40.671401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.620 [2024-10-14 17:29:40.683421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.620 [2024-10-14 17:29:40.683431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.620 [2024-10-14 17:29:40.684831] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:12:41.620 [2024-10-14 17:29:40.684870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid989516 ] 00:12:41.620 [2024-10-14 17:29:40.695459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.620 [2024-10-14 17:29:40.695474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.620 [2024-10-14 17:29:40.707485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.620 [2024-10-14 17:29:40.707494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.620 [2024-10-14 17:29:40.719520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.620 [2024-10-14 17:29:40.719529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.620 [2024-10-14 17:29:40.731551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.620 [2024-10-14 17:29:40.731560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.620 [2024-10-14 17:29:40.743582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.620 [2024-10-14 17:29:40.743591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.620 [2024-10-14 17:29:40.753206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.620 [2024-10-14 17:29:40.755621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.620 [2024-10-14 17:29:40.755631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:40.767650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:40.767664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:40.779683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:40.779699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:40.791712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:40.791722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:40.795005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.880 [2024-10-14 17:29:40.803746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:40.803756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:40.815785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:40.815805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:40.827811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:40.827829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:40.839841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:12:41.880 [2024-10-14 17:29:40.839854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:40.851881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:40.851893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:40.863912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:40.863924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:40.875944] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:40.875953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:40.887990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:40.888011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:40.900015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:40.900029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:40.912049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:40.912064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:40.924081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:40.924096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:40.936109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:40.936121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:40.948144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:40.948162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 Running I/O for 5 seconds... 
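The error pairs that fill the rest of this 5-second randrw run (-q 128, -M 50, 8192-byte I/O) are expected noise rather than failures: while bdevperf (perfpid 989516) drives traffic, the script keeps re-issuing nvmf_subsystem_add_ns for NSID 1. Each attempt pauses and resumes the subsystem (the nvmf_rpc_ns_paused frames) before being rejected with "Requested NSID 1 already in use", deliberately racing namespace management against in-flight zero-copy requests. A hedged sketch of such a loop, with $RPC as in the earlier sketch (the script's real iteration bound is not visible in this trace):

  # Hammer namespace management while I/O is in flight; every call should
  # fail cleanly and leave the zcopy traffic undisturbed.
  while kill -0 "$perfpid" 2> /dev/null; do
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done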
00:12:41.880 [2024-10-14 17:29:40.960177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:40.960188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:40.974589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:40.974627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:40.988564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:40.988582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:41.002161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:41.002180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.880 [2024-10-14 17:29:41.016303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.880 [2024-10-14 17:29:41.016322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.139 [2024-10-14 17:29:41.029954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.139 [2024-10-14 17:29:41.029974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.139 [2024-10-14 17:29:41.044183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.139 [2024-10-14 17:29:41.044202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.139 [2024-10-14 17:29:41.058286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.139 [2024-10-14 17:29:41.058304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.139 [2024-10-14 17:29:41.072374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.139 [2024-10-14 17:29:41.072392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.139 [2024-10-14 17:29:41.086127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.139 [2024-10-14 17:29:41.086145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.139 [2024-10-14 17:29:41.099840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.139 [2024-10-14 17:29:41.099857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.139 [2024-10-14 17:29:41.113483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.139 [2024-10-14 17:29:41.113500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.139 [2024-10-14 17:29:41.127512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.139 [2024-10-14 17:29:41.127530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.139 [2024-10-14 17:29:41.141779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.139 [2024-10-14 17:29:41.141797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.139 [2024-10-14 17:29:41.155612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.139 
[2024-10-14 17:29:41.155630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.140 [2024-10-14 17:29:41.169527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.140 [2024-10-14 17:29:41.169545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.140 [2024-10-14 17:29:41.183680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.140 [2024-10-14 17:29:41.183698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.140 [2024-10-14 17:29:41.197326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.140 [2024-10-14 17:29:41.197343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.140 [2024-10-14 17:29:41.211119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.140 [2024-10-14 17:29:41.211138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.140 [2024-10-14 17:29:41.224893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.140 [2024-10-14 17:29:41.224911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.140 [2024-10-14 17:29:41.239029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.140 [2024-10-14 17:29:41.239047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.140 [2024-10-14 17:29:41.252806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.140 [2024-10-14 17:29:41.252823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.140 [2024-10-14 17:29:41.267095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.140 [2024-10-14 17:29:41.267117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.280544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.280563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.294203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.294221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.308130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.308148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.321367] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.321385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.335026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.335043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.348665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.348682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.362243] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.362261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.376096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.376113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.389510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.389529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.403196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.403213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.417195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.417213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.427965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.427983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.442513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.442531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.456286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.456303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.470300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.470317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.483787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.483804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.497465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.497482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.511303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.511320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.524911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.524933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.399 [2024-10-14 17:29:41.538941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.399 [2024-10-14 17:29:41.538958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.658 [2024-10-14 17:29:41.552950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.658 [2024-10-14 17:29:41.552969] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.658 [2024-10-14 17:29:41.566554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.658 [2024-10-14 17:29:41.566572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.658 [2024-10-14 17:29:41.579990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.658 [2024-10-14 17:29:41.580008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.658 [2024-10-14 17:29:41.593685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.658 [2024-10-14 17:29:41.593702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.658 [2024-10-14 17:29:41.607456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.658 [2024-10-14 17:29:41.607473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.658 [2024-10-14 17:29:41.621186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.658 [2024-10-14 17:29:41.621203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.658 [2024-10-14 17:29:41.635345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.658 [2024-10-14 17:29:41.635363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.658 [2024-10-14 17:29:41.649294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.658 [2024-10-14 17:29:41.649313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.658 [2024-10-14 17:29:41.662868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.658 [2024-10-14 17:29:41.662886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.658 [2024-10-14 17:29:41.676413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.658 [2024-10-14 17:29:41.676430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.658 [2024-10-14 17:29:41.690451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.658 [2024-10-14 17:29:41.690468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.658 [2024-10-14 17:29:41.704193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.658 [2024-10-14 17:29:41.704211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.658 [2024-10-14 17:29:41.713562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.658 [2024-10-14 17:29:41.713580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.658 [2024-10-14 17:29:41.727810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.658 [2024-10-14 17:29:41.727827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.658 [2024-10-14 17:29:41.741256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.658 [2024-10-14 17:29:41.741273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.658 [2024-10-14 17:29:41.755470] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.658 [2024-10-14 17:29:41.755487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.658 [2024-10-14 17:29:41.766332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.658 [2024-10-14 17:29:41.766349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.658 [2024-10-14 17:29:41.780243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.658 [2024-10-14 17:29:41.780269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.658 [2024-10-14 17:29:41.793783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.658 [2024-10-14 17:29:41.793800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.920 [2024-10-14 17:29:41.807640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.920 [2024-10-14 17:29:41.807658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.920 [2024-10-14 17:29:41.821187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.920 [2024-10-14 17:29:41.821205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.920 [2024-10-14 17:29:41.835157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.920 [2024-10-14 17:29:41.835174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.920 [2024-10-14 17:29:41.848670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.920 [2024-10-14 17:29:41.848687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.920 [2024-10-14 17:29:41.862534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.920 [2024-10-14 17:29:41.862552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.920 [2024-10-14 17:29:41.876091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.920 [2024-10-14 17:29:41.876109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.920 [2024-10-14 17:29:41.889717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.920 [2024-10-14 17:29:41.889735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.920 [2024-10-14 17:29:41.903520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.920 [2024-10-14 17:29:41.903537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.920 [2024-10-14 17:29:41.917806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.921 [2024-10-14 17:29:41.917823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.921 [2024-10-14 17:29:41.926870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.921 [2024-10-14 17:29:41.926887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.921 [2024-10-14 17:29:41.941263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.921 [2024-10-14 17:29:41.941280] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.921 [2024-10-14 17:29:41.955092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.921 [2024-10-14 17:29:41.955109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.921 16889.00 IOPS, 131.95 MiB/s [2024-10-14T15:29:42.059Z] [2024-10-14 17:29:41.968685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.921 [2024-10-14 17:29:41.968702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.921 [2024-10-14 17:29:41.982555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.921 [2024-10-14 17:29:41.982572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.921 [2024-10-14 17:29:41.996139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.921 [2024-10-14 17:29:41.996158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.921 [2024-10-14 17:29:42.009788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.921 [2024-10-14 17:29:42.009805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.921 [2024-10-14 17:29:42.023038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.921 [2024-10-14 17:29:42.023057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.921 [2024-10-14 17:29:42.036635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.921 [2024-10-14 17:29:42.036655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.921 [2024-10-14 17:29:42.050317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.921 [2024-10-14 17:29:42.050336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.179 [2024-10-14 17:29:42.064358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.179 [2024-10-14 17:29:42.064377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.179 [2024-10-14 17:29:42.077929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.179 [2024-10-14 17:29:42.077947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.179 [2024-10-14 17:29:42.091772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.179 [2024-10-14 17:29:42.091790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.179 [2024-10-14 17:29:42.105372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.179 [2024-10-14 17:29:42.105390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.179 [2024-10-14 17:29:42.119511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.179 [2024-10-14 17:29:42.119530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.179 [2024-10-14 17:29:42.133217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.179 [2024-10-14 17:29:42.133235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.179 [2024-10-14 
17:29:42.146840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.179 [2024-10-14 17:29:42.146858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.179 [2024-10-14 17:29:42.160806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.179 [2024-10-14 17:29:42.160824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.179 [2024-10-14 17:29:42.174720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.179 [2024-10-14 17:29:42.174739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.179 [2024-10-14 17:29:42.188250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.179 [2024-10-14 17:29:42.188268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.179 [2024-10-14 17:29:42.202150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.179 [2024-10-14 17:29:42.202168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.179 [2024-10-14 17:29:42.215999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.179 [2024-10-14 17:29:42.216018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.179 [2024-10-14 17:29:42.229639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.179 [2024-10-14 17:29:42.229658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.179 [2024-10-14 17:29:42.243417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.179 [2024-10-14 17:29:42.243436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.179 [2024-10-14 17:29:42.257341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.179 [2024-10-14 17:29:42.257358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.179 [2024-10-14 17:29:42.271087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.179 [2024-10-14 17:29:42.271106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.179 [2024-10-14 17:29:42.284925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.179 [2024-10-14 17:29:42.284943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.179 [2024-10-14 17:29:42.298765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.179 [2024-10-14 17:29:42.298784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.179 [2024-10-14 17:29:42.312375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.179 [2024-10-14 17:29:42.312393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.326194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.326213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.340317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.340335] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.353792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.353810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.367719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.367738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.381653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.381671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.395381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.395399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.409241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.409259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.422835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.422853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.437045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.437063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.448089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.448106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.457692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.457711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.472179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.472197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.485295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.485313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.494891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.494908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.508946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.508964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.522513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.522531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.536260] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.536278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.549875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.549893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.563352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.563370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.438 [2024-10-14 17:29:42.577225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.438 [2024-10-14 17:29:42.577243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.697 [2024-10-14 17:29:42.591020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.697 [2024-10-14 17:29:42.591038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.697 [2024-10-14 17:29:42.604835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.697 [2024-10-14 17:29:42.604852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.697 [2024-10-14 17:29:42.618848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.697 [2024-10-14 17:29:42.618865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.697 [2024-10-14 17:29:42.632776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.697 [2024-10-14 17:29:42.632794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.697 [2024-10-14 17:29:42.646618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.697 [2024-10-14 17:29:42.646636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.697 [2024-10-14 17:29:42.660543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.697 [2024-10-14 17:29:42.660562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.697 [2024-10-14 17:29:42.674638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.697 [2024-10-14 17:29:42.674658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.697 [2024-10-14 17:29:42.688302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.697 [2024-10-14 17:29:42.688319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.697 [2024-10-14 17:29:42.702172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.697 [2024-10-14 17:29:42.702190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.697 [2024-10-14 17:29:42.716132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.697 [2024-10-14 17:29:42.716149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.697 [2024-10-14 17:29:42.730149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.697 [2024-10-14 17:29:42.730167] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.697 [2024-10-14 17:29:42.743663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.697 [2024-10-14 17:29:42.743680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.697 [2024-10-14 17:29:42.757470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.697 [2024-10-14 17:29:42.757488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.697 [2024-10-14 17:29:42.771312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.697 [2024-10-14 17:29:42.771330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.697 [2024-10-14 17:29:42.785608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.697 [2024-10-14 17:29:42.785626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.697 [2024-10-14 17:29:42.796484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.697 [2024-10-14 17:29:42.796506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.697 [2024-10-14 17:29:42.810324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.697 [2024-10-14 17:29:42.810340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.697 [2024-10-14 17:29:42.824219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.697 [2024-10-14 17:29:42.824237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 [2024-10-14 17:29:42.837819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:42.837838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 [2024-10-14 17:29:42.851398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:42.851415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 [2024-10-14 17:29:42.865590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:42.865615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 [2024-10-14 17:29:42.879464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:42.879482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 [2024-10-14 17:29:42.893069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:42.893087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 [2024-10-14 17:29:42.906670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:42.906687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 [2024-10-14 17:29:42.920500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:42.920518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 [2024-10-14 17:29:42.934334] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:42.934352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 [2024-10-14 17:29:42.947803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:42.947820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 [2024-10-14 17:29:42.961520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:42.961537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 16953.50 IOPS, 132.45 MiB/s [2024-10-14T15:29:43.095Z] [2024-10-14 17:29:42.975855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:42.975872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 [2024-10-14 17:29:42.992121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:42.992138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 [2024-10-14 17:29:43.003205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:43.003228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 [2024-10-14 17:29:43.012454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:43.012471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 [2024-10-14 17:29:43.026876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:43.026894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 [2024-10-14 17:29:43.040151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:43.040173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 [2024-10-14 17:29:43.054256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:43.054278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 [2024-10-14 17:29:43.068382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:43.068400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 [2024-10-14 17:29:43.081961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:43.081978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.957 [2024-10-14 17:29:43.095772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.957 [2024-10-14 17:29:43.095790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.217 [2024-10-14 17:29:43.109343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.217 [2024-10-14 17:29:43.109361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.217 [2024-10-14 17:29:43.123386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:44.217 [2024-10-14 17:29:43.123403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.217 [2024-10-14 17:29:43.137482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.217 [2024-10-14 17:29:43.137500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.217 [2024-10-14 17:29:43.151214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.217 [2024-10-14 17:29:43.151232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.217 [2024-10-14 17:29:43.165041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.217 [2024-10-14 17:29:43.165058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.217 [2024-10-14 17:29:43.178790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.217 [2024-10-14 17:29:43.178808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.217 [2024-10-14 17:29:43.192822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.217 [2024-10-14 17:29:43.192840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.217 [2024-10-14 17:29:43.207102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.217 [2024-10-14 17:29:43.207119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.217 [2024-10-14 17:29:43.220849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.217 [2024-10-14 17:29:43.220872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.217 [2024-10-14 17:29:43.234767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.217 [2024-10-14 17:29:43.234784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.217 [2024-10-14 17:29:43.248451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.217 [2024-10-14 17:29:43.248469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.217 [2024-10-14 17:29:43.262254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.217 [2024-10-14 17:29:43.262272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.217 [2024-10-14 17:29:43.275953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.217 [2024-10-14 17:29:43.275971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.217 [2024-10-14 17:29:43.289732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.217 [2024-10-14 17:29:43.289750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.217 [2024-10-14 17:29:43.303852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.217 [2024-10-14 17:29:43.303869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.217 [2024-10-14 17:29:43.317233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.217 [2024-10-14 17:29:43.317256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.217 [2024-10-14 17:29:43.330843] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.218 [2024-10-14 17:29:43.330861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.218 [2024-10-14 17:29:43.344695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.218 [2024-10-14 17:29:43.344712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.476 [2024-10-14 17:29:43.358710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.476 [2024-10-14 17:29:43.358729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.476 [2024-10-14 17:29:43.372382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.476 [2024-10-14 17:29:43.372401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.476 [2024-10-14 17:29:43.386182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.477 [2024-10-14 17:29:43.386200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.477 [2024-10-14 17:29:43.400037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.477 [2024-10-14 17:29:43.400057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.477 [2024-10-14 17:29:43.414034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.477 [2024-10-14 17:29:43.414053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.477 [2024-10-14 17:29:43.427819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.477 [2024-10-14 17:29:43.427841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.477 [2024-10-14 17:29:43.441829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.477 [2024-10-14 17:29:43.441848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.477 [2024-10-14 17:29:43.455647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.477 [2024-10-14 17:29:43.455665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.477 [2024-10-14 17:29:43.469498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.477 [2024-10-14 17:29:43.469517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.477 [2024-10-14 17:29:43.483835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.477 [2024-10-14 17:29:43.483854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.477 [2024-10-14 17:29:43.498137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.477 [2024-10-14 17:29:43.498155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.477 [2024-10-14 17:29:43.514130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.477 [2024-10-14 17:29:43.514148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.477 [2024-10-14 17:29:43.527944] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.477 [2024-10-14 17:29:43.527962] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.477 [2024-10-14 17:29:43.541547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.477 [2024-10-14 17:29:43.541565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.477 [2024-10-14 17:29:43.556021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.477 [2024-10-14 17:29:43.556039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.477 [2024-10-14 17:29:43.571924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.477 [2024-10-14 17:29:43.571943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.477 [2024-10-14 17:29:43.586009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.477 [2024-10-14 17:29:43.586033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.477 [2024-10-14 17:29:43.599668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.477 [2024-10-14 17:29:43.599686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.477 [2024-10-14 17:29:43.614064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.477 [2024-10-14 17:29:43.614083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.736 [2024-10-14 17:29:43.629076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.736 [2024-10-14 17:29:43.629095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.736 [2024-10-14 17:29:43.643293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.736 [2024-10-14 17:29:43.643312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.736 [2024-10-14 17:29:43.657398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.736 [2024-10-14 17:29:43.657416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.736 [2024-10-14 17:29:43.671077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.736 [2024-10-14 17:29:43.671095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.736 [2024-10-14 17:29:43.685231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.736 [2024-10-14 17:29:43.685250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.736 [2024-10-14 17:29:43.696173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.736 [2024-10-14 17:29:43.696192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.736 [2024-10-14 17:29:43.710461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.736 [2024-10-14 17:29:43.710481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.736 [2024-10-14 17:29:43.724062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.736 [2024-10-14 17:29:43.724081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.736 [2024-10-14 17:29:43.738508] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.736 [2024-10-14 17:29:43.738526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.736 [2024-10-14 17:29:43.749974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.736 [2024-10-14 17:29:43.749992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.736 [2024-10-14 17:29:43.764276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.736 [2024-10-14 17:29:43.764294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.736 [2024-10-14 17:29:43.773764] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.736 [2024-10-14 17:29:43.773794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.736 [2024-10-14 17:29:43.788079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.736 [2024-10-14 17:29:43.788098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.736 [2024-10-14 17:29:43.801787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.736 [2024-10-14 17:29:43.801807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.736 [2024-10-14 17:29:43.815575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.736 [2024-10-14 17:29:43.815593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.736 [2024-10-14 17:29:43.829727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.736 [2024-10-14 17:29:43.829745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.736 [2024-10-14 17:29:43.843328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.736 [2024-10-14 17:29:43.843346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.737 [2024-10-14 17:29:43.857703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.737 [2024-10-14 17:29:43.857722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.737 [2024-10-14 17:29:43.869183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.737 [2024-10-14 17:29:43.869201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.996 [2024-10-14 17:29:43.883104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.996 [2024-10-14 17:29:43.883122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.996 [2024-10-14 17:29:43.896807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.996 [2024-10-14 17:29:43.896825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.996 [2024-10-14 17:29:43.911028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.996 [2024-10-14 17:29:43.911045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.996 [2024-10-14 17:29:43.924751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.996 [2024-10-14 17:29:43.924769] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:44.996 [2024-10-14 17:29:43.938364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:44.996 [2024-10-14 17:29:43.938381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the identical subsystem.c:2128 / nvmf_rpc.c:1517 error pair repeats roughly every 10-15 ms; records for 17:29:43.952296-.952313 omitted ...]
00:12:44.996 16911.00 IOPS, 132.12 MiB/s [2024-10-14T15:29:44.134Z]
[... duplicate error pairs continue, 17:29:43.965931 through 17:29:44.959143; omitted ...]
00:12:46.034 16912.00 IOPS, 132.12 MiB/s [2024-10-14T15:29:45.172Z]
[... duplicate error pairs continue, 17:29:44.972806 through 17:29:45.961034; omitted ...]
00:12:47.071 16898.20 IOPS, 132.02 MiB/s [2024-10-14T15:29:46.209Z]
00:12:47.071 [2024-10-14 17:29:45.974408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:47.071 [2024-10-14 17:29:45.974426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:47.071
00:12:47.071 Latency(us)
00:12:47.071 [2024-10-14T15:29:46.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:47.071 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:47.071 Nvme1n1 : 5.01 16896.99 132.01 0.00 0.00 7567.21 2995.93 17226.61
00:12:47.071 [2024-10-14T15:29:46.209Z] ===================================================================================================================
00:12:47.071 [2024-10-14T15:29:46.209Z] Total : 16896.99 132.01 0.00 0.00 7567.21 2995.93 17226.61
00:12:47.071 [2024-10-14 17:29:45.983512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:47.071 [2024-10-14 17:29:45.983528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... duplicate error pairs continue through 17:29:46.127904 while the retry loop winds down; omitted ...]
00:12:47.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (989516) - No such process
00:12:47.071 17:29:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 989516
00:12:47.071 17:29:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:47.071 17:29:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.071 17:29:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:47.071 17:29:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.071 17:29:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:12:47.071 17:29:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.071 17:29:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:47.071 delay0
00:12:47.071 17:29:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.071 17:29:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:12:47.071 17:29:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.071 17:29:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:47.071 17:29:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.071 17:29:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
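[Note: the error storm collapsed above appears to be a namespace-retry loop driven by the test while I/O runs: as long as NSID 1 is still attached to nqn.2016-06.io.spdk:cnode1, every repeated nvmf_subsystem_add_ns call asking for the same NSID is rejected by spdk_nvmf_subsystem_add_ns_ext(). A minimal sketch of the failing call using SPDK's standard scripts/rpc.py client; the subsystem NQN and the malloc0 bdev name are taken from this run, not a fixed API:

    # First add succeeds and claims NSID 1 on the subsystem.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Re-issuing the same call while NSID 1 is attached reproduces the
    # "Requested NSID 1 already in use" / "Unable to add namespace" pair.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
]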
00:12:47.329 [2024-10-14 17:29:46.306765] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:12:53.925 Initializing NVMe Controllers
00:12:53.925 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:53.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:53.925 Initialization complete. Launching workers.
00:12:53.925 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3374
00:12:53.925 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3656, failed to submit 38
00:12:53.925 success 3467, unsuccessful 189, failed 0
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 987715 ']'
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 987715
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 987715 ']'
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 987715
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 987715
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 987715'
killing process with pid 987715
17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 987715
00:12:53.925 17:29:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 987715
00:12:53.925 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:12:53.925 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
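[Note: the nvmftestfini teardown traced here (and continued just below) reduces to the following command sequence; a condensed sketch using the PID and interface names observed in this run, not fixed values:

    sync                                                  # flush dirty pages before unloading modules
    modprobe -v -r nvme-tcp                               # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 987715 && wait 987715                            # stop the SPDK target (reactor_1) process
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop the test's SPDK_NVMF firewall rules
    ip -4 addr flush cvl_0_1                              # clear addresses from the test NIC
]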
00:12:53.925 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:12:53.925 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:12:53.925 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save
00:12:53.925 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:12:53.925 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore
00:12:54.215 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:54.215 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:54.215 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:54.215 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:54.215 17:29:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:56.133 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:56.133
00:12:56.133 real 0m31.726s
00:12:56.133 user 0m42.284s
00:12:56.133 sys 0m11.344s
00:12:56.133 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:56.133 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:56.133 ************************************
00:12:56.133 END TEST nvmf_zcopy
00:12:56.133 ************************************
00:12:56.133 17:29:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:12:56.133 17:29:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:12:56.133 17:29:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:56.133 17:29:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:12:56.133 ************************************
00:12:56.133 START TEST nvmf_nmic
00:12:56.133 ************************************
00:12:56.133 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:12:56.393 * Looking for test storage...
00:12:56.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:12:56.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:56.393 --rc genhtml_branch_coverage=1
00:12:56.393 --rc genhtml_function_coverage=1
00:12:56.393 --rc genhtml_legend=1
00:12:56.393 --rc geninfo_all_blocks=1
00:12:56.393 --rc geninfo_unexecuted_blocks=1
00:12:56.393
00:12:56.393 '
[... the identical option block is echoed three more times, for the LCOV_OPTS assignment (common/autotest_common.sh@1704) and the LCOV export and assignment (common/autotest_common.sh@1705); duplicates omitted ...]
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:12:56.393 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
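[Note: the cmp_versions trace above is a field-wise version comparison: split both versions on IFS=.-:, then compare field by field as decimals. A minimal standalone bash sketch of the same idea, not the exact scripts/common.sh source:

    lt() {                                   # returns 0 (true) if $1 < $2, e.g. lt 1.15 2
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"       # "1.15" -> (1 15)
        IFS='.-:' read -ra v2 <<< "$2"       # "2"    -> (2)
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # missing fields count as 0
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1                             # versions are equal
    }
]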
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same /opt/golangci, /opt/protoc, /opt/go prefixes repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[... paths/export.sh@3 and @4 reassign PATH with the same toolchain prefixes rotated to the front; near-identical values omitted ...]
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH
[... paths/export.sh@6 echoes the exported PATH once more; value identical to the above, omitted ...]
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
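[Note: the NVME_HOSTNQN and NVME_HOSTID generated above become the host identity arguments that NVME_HOST feeds to nvme-cli. A hypothetical connect built from this run's values, using standard nvme-cli flags (the target subsystem nqn.2016-06.io.spdk:cnode1 is taken from the earlier zcopy run, purely for illustration):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --hostid=00ad29c2-ccbd-e911-906e-0017a4403562
]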
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable
00:12:56.394 17:29:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:02.973 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:13:02.973 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=()
00:13:02.973 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs
00:13:02.973 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=()
00:13:02.973 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:13:02.973 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=()
00:13:02.973 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers
00:13:02.973 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=()
00:13:02.973 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs
00:13:02.973 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=()
00:13:02.973 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=()
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=()
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
Found 0000:86:00.0 (0x8086 - 0x159b)
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
Found 0000:86:00.1 (0x8086 - 0x159b)
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:13:02.974 17:30:01
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:02.974 Found net devices under 0000:86:00.0: cvl_0_0 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:02.974 Found net devices under 0000:86:00.1: cvl_0_1 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:02.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:13:02.974 00:13:02.974 --- 10.0.0.2 ping statistics --- 00:13:02.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.974 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:02.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:13:02.974 00:13:02.974 --- 10.0.0.1 ping statistics --- 00:13:02.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.974 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=995193 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 995193 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 995193 ']' 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.974 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:02.975 [2024-10-14 17:30:01.469844] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
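Condensed, the nvmf_tcp_init trace above performs the following setup; every command and name here (the cvl_0_0/cvl_0_1 interfaces, the cvl_0_0_ns_spdk namespace, the 10.0.0.0/24 addresses) is taken from the log, so this is just the same sequence with the xtrace noise stripped:

# Move the target-side port into its own network namespace; the
# initiator-side port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends of the loopback pair.
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target

# Bring the links up (plus loopback inside the namespace).
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP listen port and verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1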
00:13:02.975 [2024-10-14 17:30:01.469893] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.975 [2024-10-14 17:30:01.542305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.975 [2024-10-14 17:30:01.586667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.975 [2024-10-14 17:30:01.586704] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.975 [2024-10-14 17:30:01.586711] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.975 [2024-10-14 17:30:01.586717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.975 [2024-10-14 17:30:01.586722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.975 [2024-10-14 17:30:01.588354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.975 [2024-10-14 17:30:01.588462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.975 [2024-10-14 17:30:01.588568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.975 [2024-10-14 17:30:01.588569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:02.975 [2024-10-14 17:30:01.729642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:02.975 Malloc0 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:02.975 [2024-10-14 17:30:01.792779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:02.975 test case1: single bdev can't be used in multiple subsystems 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:02.975 [2024-10-14 17:30:01.820678] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:02.975 [2024-10-14 17:30:01.820698] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:02.975 [2024-10-14 17:30:01.820706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:02.975 request: 00:13:02.975 { 00:13:02.975 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:02.975 "namespace": { 00:13:02.975 "bdev_name": "Malloc0", 00:13:02.975 "no_auto_visible": false 
00:13:02.975 }, 00:13:02.975 "method": "nvmf_subsystem_add_ns", 00:13:02.975 "req_id": 1 00:13:02.975 } 00:13:02.975 Got JSON-RPC error response 00:13:02.975 response: 00:13:02.975 { 00:13:02.975 "code": -32602, 00:13:02.975 "message": "Invalid parameters" 00:13:02.975 } 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:02.975 Adding namespace failed - expected result. 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:02.975 test case2: host connect to nvmf target in multiple paths 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:02.975 [2024-10-14 17:30:01.832824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.975 17:30:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.911 17:30:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:05.287 17:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:05.287 17:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:05.287 17:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.287 17:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:05.287 17:30:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:07.192 17:30:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:07.192 17:30:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:07.192 17:30:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.192 17:30:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:07.192 17:30:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.192 17:30:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:07.192 17:30:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
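Test case 1 above exercises SPDK's exclusive bdev claim: the first nvmf_subsystem_add_ns claims Malloc0 with an exclusive_write claim, so adding the same bdev to a second subsystem is rejected with JSON-RPC error -32602. The rpc_cmd calls in the trace map onto scripts/rpc.py subcommands of the same name; a sketch of the sequence as direct rpc.py invocations (the $rpc shorthand is ours, not the test script's):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0       # claims Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0       # fails: bdev already claimed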
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:07.192 [global] 00:13:07.192 thread=1 00:13:07.192 invalidate=1 00:13:07.192 rw=write 00:13:07.192 time_based=1 00:13:07.192 runtime=1 00:13:07.192 ioengine=libaio 00:13:07.192 direct=1 00:13:07.192 bs=4096 00:13:07.192 iodepth=1 00:13:07.192 norandommap=0 00:13:07.192 numjobs=1 00:13:07.192 00:13:07.192 verify_dump=1 00:13:07.192 verify_backlog=512 00:13:07.192 verify_state_save=0 00:13:07.192 do_verify=1 00:13:07.192 verify=crc32c-intel 00:13:07.192 [job0] 00:13:07.192 filename=/dev/nvme0n1 00:13:07.192 Could not set queue depth (nvme0n1) 00:13:07.451 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:07.451 fio-3.35 00:13:07.451 Starting 1 thread 00:13:08.828 00:13:08.828 job0: (groupid=0, jobs=1): err= 0: pid=996307: Mon Oct 14 17:30:07 2024 00:13:08.828 read: IOPS=1979, BW=7916KiB/s (8106kB/s)(7924KiB/1001msec) 00:13:08.828 slat (nsec): min=6073, max=28702, avg=7116.33, stdev=1370.82 00:13:08.828 clat (usec): min=138, max=41097, avg=351.99, stdev=2584.06 00:13:08.828 lat (usec): min=144, max=41119, avg=359.11, stdev=2584.93 00:13:08.828 clat percentiles (usec): 00:13:08.828 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:13:08.828 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 180], 00:13:08.828 | 70.00th=[ 198], 80.00th=[ 229], 90.00th=[ 245], 95.00th=[ 269], 00:13:08.828 | 99.00th=[ 285], 99.50th=[ 392], 99.90th=[41157], 99.95th=[41157], 00:13:08.828 | 99.99th=[41157] 00:13:08.828 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:13:08.828 slat (nsec): min=8914, max=44258, avg=9879.46, stdev=1503.53 00:13:08.828 clat (usec): min=98, max=2655, avg=126.82, stdev=59.32 00:13:08.828 lat (usec): min=113, max=2665, avg=136.70, stdev=59.38 00:13:08.828 clat percentiles (usec): 00:13:08.828 | 1.00th=[ 108], 5.00th=[ 111], 10.00th=[ 113], 20.00th=[ 114], 00:13:08.828 | 30.00th=[ 116], 40.00th=[ 117], 50.00th=[ 119], 60.00th=[ 121], 00:13:08.828 | 70.00th=[ 124], 80.00th=[ 137], 90.00th=[ 157], 95.00th=[ 163], 00:13:08.828 | 99.00th=[ 198], 99.50th=[ 239], 99.90th=[ 249], 99.95th=[ 251], 00:13:08.828 | 99.99th=[ 2671] 00:13:08.828 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:13:08.828 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:08.828 lat (usec) : 100=0.02%, 250=96.13%, 500=3.62% 00:13:08.828 lat (msec) : 4=0.02%, 50=0.20% 00:13:08.828 cpu : usr=2.00%, sys=3.50%, ctx=4029, majf=0, minf=1 00:13:08.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:08.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.828 issued rwts: total=1981,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:08.828 00:13:08.828 Run status group 0 (all jobs): 00:13:08.828 READ: bw=7916KiB/s (8106kB/s), 7916KiB/s-7916KiB/s (8106kB/s-8106kB/s), io=7924KiB (8114kB), run=1001-1001msec 00:13:08.828 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:13:08.828 00:13:08.828 Disk stats (read/write): 00:13:08.828 nvme0n1: ios=1637/2048, merge=0/0, ticks=900/241, in_queue=1141, util=99.50% 00:13:08.828 17:30:07 
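The fio-wrapper flags above translate into the job file fio echoes back: -i 4096 becomes bs=4096, -d 1 becomes iodepth=1, -t write becomes rw=write, -r 1 becomes runtime=1 with time_based, and -v enables crc32c-intel verification, all run against the /dev/nvme0n1 device created by the connect step. A roughly equivalent standalone invocation, assuming the wrapper adds nothing beyond what the job file shows:

fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread --invalidate=1 \
    --time_based --runtime=1 --do_verify=1 --verify=crc32c-intel \
    --verify_dump=1 --verify_backlog=512 --verify_state_save=0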
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:08.828 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.828 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:08.828 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:08.828 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.828 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.828 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:08.828 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:08.828 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:08.828 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:08.828 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:08.828 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:13:08.828 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:08.828 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:13:08.828 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:08.828 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:08.828 rmmod nvme_tcp 00:13:08.828 rmmod nvme_fabrics 00:13:08.828 rmmod nvme_keyring 00:13:08.828 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:08.828 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:13:08.829 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:13:08.829 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 995193 ']' 00:13:08.829 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 995193 00:13:08.829 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 995193 ']' 00:13:08.829 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 995193 00:13:08.829 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:13:08.829 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:08.829 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 995193 00:13:08.829 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:08.829 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:08.829 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 995193' 00:13:08.829 killing process with pid 995193 00:13:08.829 17:30:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 995193 00:13:08.829 17:30:07 
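Teardown mirrors setup: disconnect the host, unload the kernel modules, kill the in-namespace target, and undo the network plumbing. A condensed sketch of what this nvmftestfini trace does, with one caveat: the namespace deletion is presumed to happen inside _remove_spdk_ns, which the log only shows being eval'd:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp            # rmmod lines show nvme_fabrics/nvme_keyring going too
modprobe -v -r nvme-fabrics
kill 995193                        # the nvmfpid recorded at startup
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test ACCEPT rule
ip netns delete cvl_0_0_ns_spdk    # presumed: done by _remove_spdk_ns
ip -4 addr flush cvl_0_1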
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 995193 00:13:09.087 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:09.087 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:09.087 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:09.087 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:13:09.087 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:13:09.087 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:09.087 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:13:09.087 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:09.087 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:09.087 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.087 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.087 17:30:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.991 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:10.991 00:13:10.991 real 0m14.886s 00:13:10.991 user 0m32.443s 00:13:10.991 sys 0m5.408s 00:13:10.991 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:10.991 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:10.991 ************************************ 00:13:10.991 END TEST nvmf_nmic 00:13:10.991 ************************************ 00:13:10.991 17:30:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:10.991 17:30:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:10.991 17:30:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:10.991 17:30:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:11.251 ************************************ 00:13:11.251 START TEST nvmf_fio_target 00:13:11.251 ************************************ 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:11.251 * Looking for test storage... 
00:13:11.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:11.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.251 --rc genhtml_branch_coverage=1 00:13:11.251 --rc genhtml_function_coverage=1 00:13:11.251 --rc genhtml_legend=1 00:13:11.251 --rc geninfo_all_blocks=1 00:13:11.251 --rc geninfo_unexecuted_blocks=1 00:13:11.251 00:13:11.251 ' 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:11.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.251 --rc genhtml_branch_coverage=1 00:13:11.251 --rc genhtml_function_coverage=1 00:13:11.251 --rc genhtml_legend=1 00:13:11.251 --rc geninfo_all_blocks=1 00:13:11.251 --rc geninfo_unexecuted_blocks=1 00:13:11.251 00:13:11.251 ' 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:11.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.251 --rc genhtml_branch_coverage=1 00:13:11.251 --rc genhtml_function_coverage=1 00:13:11.251 --rc genhtml_legend=1 00:13:11.251 --rc geninfo_all_blocks=1 00:13:11.251 --rc geninfo_unexecuted_blocks=1 00:13:11.251 00:13:11.251 ' 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:11.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.251 --rc genhtml_branch_coverage=1 00:13:11.251 --rc genhtml_function_coverage=1 00:13:11.251 --rc genhtml_legend=1 00:13:11.251 --rc geninfo_all_blocks=1 00:13:11.251 --rc geninfo_unexecuted_blocks=1 00:13:11.251 00:13:11.251 ' 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:13:11.251 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:11.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:11.252 17:30:10 
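The very long PATH echoed by paths/export.sh here (and at the top of every test script) is the result of lines @2 through @4 unconditionally prepending the same go/golangci/protoc directories each time the file is sourced, so the triplet stacks up once per nested script. An idempotent prepend, sketched as an alternative rather than as what export.sh currently does:

prepend_path() {
    # Prepend only if the directory is not already on PATH.
    case ":$PATH:" in
        *":$1:"*) ;;
        *) PATH="$1:$PATH" ;;
    esac
}
prepend_path /opt/protoc/21.7/bin
prepend_path /opt/golangci/1.54.2/bin
prepend_path /opt/go/1.21.1/bin
export PATH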
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:13:11.252 17:30:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.825 17:30:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:17.825 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:17.825 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.825 17:30:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:17.825 Found net devices under 0000:86:00.0: cvl_0_0 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:17.825 Found net devices under 0000:86:00.1: cvl_0_1 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:17.825 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:17.826 17:30:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:17.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:17.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:13:17.826 00:13:17.826 --- 10.0.0.2 ping statistics --- 00:13:17.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.826 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:17.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:13:17.826 00:13:17.826 --- 10.0.0.1 ping statistics --- 00:13:17.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.826 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1000459 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1000459 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1000459 ']' 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.826 [2024-10-14 17:30:16.397589] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
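nvmfappstart launches the target inside that namespace and parks until the RPC socket answers; the launch line and arguments are exactly as traced above (-m 0xF pins four reactors, -e 0xFFFF enables every tracepoint group, -i 0 sets the shm id). A minimal stand-in for the start-and-wait step, assuming only that readiness is signalled by /var/tmp/spdk.sock becoming available — the harness's waitforlisten does a more thorough poll than this hypothetical loop:

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target runs in cvl_0_0_ns_spdk
  nvmfpid=$!
  # hypothetical readiness check standing in for waitforlisten:
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done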
00:13:17.826 [2024-10-14 17:30:16.397654] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.826 [2024-10-14 17:30:16.471050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:17.826 [2024-10-14 17:30:16.510577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.826 [2024-10-14 17:30:16.510622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.826 [2024-10-14 17:30:16.510629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.826 [2024-10-14 17:30:16.510635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.826 [2024-10-14 17:30:16.510639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.826 [2024-10-14 17:30:16.512221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.826 [2024-10-14 17:30:16.512333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.826 [2024-10-14 17:30:16.512438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.826 [2024-10-14 17:30:16.512438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:17.826 [2024-10-14 17:30:16.817863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.826 17:30:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.085 17:30:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:18.085 17:30:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.344 17:30:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:18.344 17:30:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.603 17:30:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:18.603 17:30:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.603 17:30:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:18.603 17:30:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:18.862 17:30:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:19.120 17:30:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:19.120 17:30:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:19.379 17:30:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:19.379 17:30:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:19.638 17:30:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:19.638 17:30:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:19.638 17:30:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:19.897 17:30:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:19.897 17:30:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:20.156 17:30:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:20.156 17:30:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:20.415 17:30:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.415 [2024-10-14 17:30:19.486518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.415 17:30:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:20.674 17:30:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:20.932 17:30:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:22.306 17:30:21 
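Everything target/fio.sh provisioned up to that connect is plain rpc.py traffic: seven 64 MiB malloc bdevs with 512-byte blocks, a RAID0 over Malloc2/Malloc3, a concat array over Malloc4-Malloc6, one subsystem carrying all four namespaces, and a TCP listener on 10.0.0.2:4420. Collapsed into a single sequence — commands and names verbatim from the trace, order slightly regrouped, and the connect's --hostnqn/--hostid flags dropped for brevity:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done   # Malloc0..Malloc6
  $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for ns in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $ns
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

waitforserial then polls lsblk until four devices report serial SPDKISFASTANDAWESOME before the first fio pass starts.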
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:22.306 17:30:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:22.306 17:30:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:22.306 17:30:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:22.306 17:30:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:22.306 17:30:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:24.217 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:24.217 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:24.217 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:24.217 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:24.217 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:24.217 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:24.217 17:30:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:24.217 [global] 00:13:24.217 thread=1 00:13:24.217 invalidate=1 00:13:24.217 rw=write 00:13:24.217 time_based=1 00:13:24.217 runtime=1 00:13:24.217 ioengine=libaio 00:13:24.217 direct=1 00:13:24.217 bs=4096 00:13:24.217 iodepth=1 00:13:24.217 norandommap=0 00:13:24.217 numjobs=1 00:13:24.217 00:13:24.217 verify_dump=1 00:13:24.217 verify_backlog=512 00:13:24.217 verify_state_save=0 00:13:24.217 do_verify=1 00:13:24.217 verify=crc32c-intel 00:13:24.217 [job0] 00:13:24.217 filename=/dev/nvme0n1 00:13:24.217 [job1] 00:13:24.217 filename=/dev/nvme0n2 00:13:24.217 [job2] 00:13:24.217 filename=/dev/nvme0n3 00:13:24.217 [job3] 00:13:24.217 filename=/dev/nvme0n4 00:13:24.217 Could not set queue depth (nvme0n1) 00:13:24.217 Could not set queue depth (nvme0n2) 00:13:24.217 Could not set queue depth (nvme0n3) 00:13:24.217 Could not set queue depth (nvme0n4) 00:13:24.476 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:24.476 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:24.476 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:24.476 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:24.476 fio-3.35 00:13:24.476 Starting 4 threads 00:13:25.878 00:13:25.878 job0: (groupid=0, jobs=1): err= 0: pid=1001811: Mon Oct 14 17:30:24 2024 00:13:25.878 read: IOPS=518, BW=2072KiB/s (2122kB/s)(2120KiB/1023msec) 00:13:25.878 slat (nsec): min=6933, max=28884, avg=8328.89, stdev=2957.15 00:13:25.878 clat (usec): min=155, max=42095, avg=1583.84, stdev=7420.51 00:13:25.878 lat (usec): min=163, max=42119, avg=1592.17, stdev=7423.14 00:13:25.878 clat percentiles (usec): 00:13:25.879 | 1.00th=[ 159], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 180], 
00:13:25.879 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:13:25.879 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 223], 95.00th=[ 247], 00:13:25.879 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:25.879 | 99.99th=[42206] 00:13:25.879 write: IOPS=1000, BW=4004KiB/s (4100kB/s)(4096KiB/1023msec); 0 zone resets 00:13:25.879 slat (nsec): min=10429, max=44322, avg=11543.27, stdev=1633.70 00:13:25.879 clat (usec): min=102, max=309, avg=157.43, stdev=29.03 00:13:25.879 lat (usec): min=113, max=320, avg=168.97, stdev=29.25 00:13:25.879 clat percentiles (usec): 00:13:25.879 | 1.00th=[ 112], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 127], 00:13:25.879 | 30.00th=[ 135], 40.00th=[ 145], 50.00th=[ 159], 60.00th=[ 172], 00:13:25.879 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 200], 00:13:25.879 | 99.00th=[ 210], 99.50th=[ 223], 99.90th=[ 245], 99.95th=[ 310], 00:13:25.879 | 99.99th=[ 310] 00:13:25.879 bw ( KiB/s): min= 8192, max= 8192, per=45.47%, avg=8192.00, stdev= 0.00, samples=1 00:13:25.879 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:25.879 lat (usec) : 250=98.52%, 500=0.32% 00:13:25.879 lat (msec) : 50=1.16% 00:13:25.879 cpu : usr=1.17%, sys=1.17%, ctx=1555, majf=0, minf=1 00:13:25.879 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.879 issued rwts: total=530,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.879 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.879 job1: (groupid=0, jobs=1): err= 0: pid=1001812: Mon Oct 14 17:30:24 2024 00:13:25.879 read: IOPS=2351, BW=9407KiB/s (9632kB/s)(9416KiB/1001msec) 00:13:25.879 slat (nsec): min=6255, max=27650, avg=7214.27, stdev=1015.58 00:13:25.879 clat (usec): min=153, max=41274, avg=249.99, stdev=1427.40 00:13:25.879 lat (usec): min=160, max=41284, avg=257.21, stdev=1427.79 00:13:25.879 clat percentiles (usec): 00:13:25.879 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:13:25.879 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:13:25.879 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 239], 00:13:25.879 | 99.00th=[ 260], 99.50th=[ 265], 99.90th=[38536], 99.95th=[41157], 00:13:25.879 | 99.99th=[41157] 00:13:25.879 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:25.879 slat (nsec): min=9182, max=41123, avg=10218.40, stdev=1288.67 00:13:25.879 clat (usec): min=105, max=303, avg=139.09, stdev=38.37 00:13:25.879 lat (usec): min=114, max=313, avg=149.31, stdev=38.57 00:13:25.879 clat percentiles (usec): 00:13:25.879 | 1.00th=[ 109], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 117], 00:13:25.879 | 30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 127], 00:13:25.879 | 70.00th=[ 133], 80.00th=[ 147], 90.00th=[ 200], 95.00th=[ 243], 00:13:25.879 | 99.00th=[ 247], 99.50th=[ 249], 99.90th=[ 260], 99.95th=[ 265], 00:13:25.879 | 99.99th=[ 306] 00:13:25.879 bw ( KiB/s): min= 8192, max= 8192, per=45.47%, avg=8192.00, stdev= 0.00, samples=1 00:13:25.879 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:25.879 lat (usec) : 250=98.68%, 500=1.26% 00:13:25.879 lat (msec) : 50=0.06% 00:13:25.879 cpu : usr=2.30%, sys=4.60%, ctx=4916, majf=0, minf=1 00:13:25.879 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.879 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.879 issued rwts: total=2354,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.879 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.879 job2: (groupid=0, jobs=1): err= 0: pid=1001813: Mon Oct 14 17:30:24 2024 00:13:25.879 read: IOPS=22, BW=91.5KiB/s (93.7kB/s)(92.0KiB/1005msec) 00:13:25.879 slat (nsec): min=12073, max=27923, avg=22547.09, stdev=3770.95 00:13:25.879 clat (usec): min=316, max=41992, avg=39251.26, stdev=8490.36 00:13:25.879 lat (usec): min=330, max=42019, avg=39273.81, stdev=8492.41 00:13:25.879 clat percentiles (usec): 00:13:25.879 | 1.00th=[ 318], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:13:25.879 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:25.879 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:25.879 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:25.879 | 99.99th=[42206] 00:13:25.879 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:13:25.879 slat (nsec): min=11544, max=46266, avg=13447.31, stdev=2377.32 00:13:25.879 clat (usec): min=143, max=299, avg=180.54, stdev=18.07 00:13:25.879 lat (usec): min=156, max=345, avg=193.99, stdev=18.57 00:13:25.879 clat percentiles (usec): 00:13:25.879 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 169], 00:13:25.879 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:13:25.879 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 198], 95.00th=[ 204], 00:13:25.879 | 99.00th=[ 253], 99.50th=[ 277], 99.90th=[ 302], 99.95th=[ 302], 00:13:25.879 | 99.99th=[ 302] 00:13:25.879 bw ( KiB/s): min= 4096, max= 4096, per=22.73%, avg=4096.00, stdev= 0.00, samples=1 00:13:25.879 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:25.879 lat (usec) : 250=94.58%, 500=1.31% 00:13:25.879 lat (msec) : 50=4.11% 00:13:25.879 cpu : usr=0.40%, sys=1.10%, ctx=536, majf=0, minf=1 00:13:25.879 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.879 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.879 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.879 job3: (groupid=0, jobs=1): err= 0: pid=1001814: Mon Oct 14 17:30:24 2024 00:13:25.879 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:13:25.879 slat (nsec): min=11069, max=25328, avg=19835.64, stdev=4481.49 00:13:25.879 clat (usec): min=40879, max=41989, avg=41133.05, stdev=363.46 00:13:25.879 lat (usec): min=40902, max=42008, avg=41152.89, stdev=363.61 00:13:25.879 clat percentiles (usec): 00:13:25.879 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:25.879 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:25.879 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:13:25.879 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:25.879 | 99.99th=[42206] 00:13:25.879 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:13:25.879 slat (nsec): min=11176, max=46277, avg=13854.26, stdev=2453.88 00:13:25.879 clat (usec): min=140, max=374, avg=193.14, stdev=28.90 00:13:25.879 lat (usec): min=153, max=420, avg=206.99, 
stdev=29.37 00:13:25.879 clat percentiles (usec): 00:13:25.879 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 172], 00:13:25.879 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 196], 00:13:25.879 | 70.00th=[ 206], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 241], 00:13:25.879 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 375], 99.95th=[ 375], 00:13:25.879 | 99.99th=[ 375] 00:13:25.879 bw ( KiB/s): min= 4096, max= 4096, per=22.73%, avg=4096.00, stdev= 0.00, samples=1 00:13:25.879 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:25.879 lat (usec) : 250=94.19%, 500=1.69% 00:13:25.879 lat (msec) : 50=4.12% 00:13:25.879 cpu : usr=0.20%, sys=0.89%, ctx=536, majf=0, minf=1 00:13:25.879 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.879 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.879 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.879 00:13:25.879 Run status group 0 (all jobs): 00:13:25.879 READ: bw=11.2MiB/s (11.7MB/s), 86.8KiB/s-9407KiB/s (88.9kB/s-9632kB/s), io=11.4MiB (12.0MB), run=1001-1023msec 00:13:25.879 WRITE: bw=17.6MiB/s (18.5MB/s), 2020KiB/s-9.99MiB/s (2068kB/s-10.5MB/s), io=18.0MiB (18.9MB), run=1001-1023msec 00:13:25.879 00:13:25.879 Disk stats (read/write): 00:13:25.879 nvme0n1: ios=576/1024, merge=0/0, ticks=936/155, in_queue=1091, util=86.67% 00:13:25.879 nvme0n2: ios=2039/2048, merge=0/0, ticks=567/282, in_queue=849, util=91.07% 00:13:25.879 nvme0n3: ios=76/512, merge=0/0, ticks=816/88, in_queue=904, util=94.69% 00:13:25.879 nvme0n4: ios=40/512, merge=0/0, ticks=1609/95, in_queue=1704, util=94.23% 00:13:25.879 17:30:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:25.879 [global] 00:13:25.879 thread=1 00:13:25.879 invalidate=1 00:13:25.879 rw=randwrite 00:13:25.879 time_based=1 00:13:25.879 runtime=1 00:13:25.879 ioengine=libaio 00:13:25.879 direct=1 00:13:25.879 bs=4096 00:13:25.879 iodepth=1 00:13:25.879 norandommap=0 00:13:25.879 numjobs=1 00:13:25.879 00:13:25.879 verify_dump=1 00:13:25.879 verify_backlog=512 00:13:25.879 verify_state_save=0 00:13:25.879 do_verify=1 00:13:25.879 verify=crc32c-intel 00:13:25.879 [job0] 00:13:25.879 filename=/dev/nvme0n1 00:13:25.879 [job1] 00:13:25.879 filename=/dev/nvme0n2 00:13:25.879 [job2] 00:13:25.879 filename=/dev/nvme0n3 00:13:25.879 [job3] 00:13:25.879 filename=/dev/nvme0n4 00:13:25.879 Could not set queue depth (nvme0n1) 00:13:25.879 Could not set queue depth (nvme0n2) 00:13:25.879 Could not set queue depth (nvme0n3) 00:13:25.879 Could not set queue depth (nvme0n4) 00:13:26.145 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:26.145 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:26.145 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:26.145 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:26.145 fio-3.35 00:13:26.145 Starting 4 threads 00:13:27.523 00:13:27.523 job0: (groupid=0, jobs=1): err= 0: pid=1002188: Mon Oct 14 17:30:26 2024 00:13:27.523 read: 
IOPS=23, BW=92.2KiB/s (94.4kB/s)(96.0KiB/1041msec) 00:13:27.523 slat (nsec): min=8944, max=25944, avg=21567.46, stdev=4068.36 00:13:27.523 clat (usec): min=221, max=42069, avg=39373.07, stdev=8347.74 00:13:27.523 lat (usec): min=244, max=42091, avg=39394.63, stdev=8347.57 00:13:27.523 clat percentiles (usec): 00:13:27.523 | 1.00th=[ 223], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:13:27.523 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:27.523 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:13:27.523 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:27.523 | 99.99th=[42206] 00:13:27.523 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:13:27.523 slat (nsec): min=9284, max=42440, avg=10388.38, stdev=1611.73 00:13:27.523 clat (usec): min=126, max=293, avg=172.56, stdev=16.73 00:13:27.523 lat (usec): min=136, max=304, avg=182.94, stdev=16.95 00:13:27.523 clat percentiles (usec): 00:13:27.523 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 161], 00:13:27.523 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:13:27.523 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 202], 00:13:27.523 | 99.00th=[ 221], 99.50th=[ 225], 99.90th=[ 293], 99.95th=[ 293], 00:13:27.523 | 99.99th=[ 293] 00:13:27.523 bw ( KiB/s): min= 4096, max= 4096, per=29.74%, avg=4096.00, stdev= 0.00, samples=1 00:13:27.523 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:27.523 lat (usec) : 250=95.52%, 500=0.19% 00:13:27.523 lat (msec) : 50=4.29% 00:13:27.523 cpu : usr=0.10%, sys=0.67%, ctx=537, majf=0, minf=1 00:13:27.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:27.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.523 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:27.523 job1: (groupid=0, jobs=1): err= 0: pid=1002189: Mon Oct 14 17:30:26 2024 00:13:27.523 read: IOPS=510, BW=2041KiB/s (2090kB/s)(2108KiB/1033msec) 00:13:27.523 slat (nsec): min=7354, max=44166, avg=8794.37, stdev=3279.42 00:13:27.523 clat (usec): min=169, max=42008, avg=1599.94, stdev=7426.93 00:13:27.523 lat (usec): min=177, max=42031, avg=1608.73, stdev=7429.45 00:13:27.523 clat percentiles (usec): 00:13:27.523 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 190], 00:13:27.523 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:13:27.523 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 249], 00:13:27.523 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:13:27.523 | 99.99th=[42206] 00:13:27.523 write: IOPS=991, BW=3965KiB/s (4060kB/s)(4096KiB/1033msec); 0 zone resets 00:13:27.523 slat (nsec): min=10142, max=38342, avg=11425.18, stdev=1808.83 00:13:27.523 clat (usec): min=109, max=297, avg=164.36, stdev=35.97 00:13:27.523 lat (usec): min=121, max=332, avg=175.78, stdev=36.22 00:13:27.523 clat percentiles (usec): 00:13:27.523 | 1.00th=[ 119], 5.00th=[ 125], 10.00th=[ 130], 20.00th=[ 137], 00:13:27.523 | 30.00th=[ 141], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 161], 00:13:27.523 | 70.00th=[ 178], 80.00th=[ 190], 90.00th=[ 219], 95.00th=[ 249], 00:13:27.523 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 293], 99.95th=[ 297], 00:13:27.523 | 99.99th=[ 297] 00:13:27.523 bw ( KiB/s): min= 8192, 
max= 8192, per=59.49%, avg=8192.00, stdev= 0.00, samples=1 00:13:27.523 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:27.523 lat (usec) : 250=95.10%, 500=3.74% 00:13:27.523 lat (msec) : 50=1.16% 00:13:27.523 cpu : usr=1.07%, sys=1.84%, ctx=1552, majf=0, minf=1 00:13:27.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:27.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.523 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:27.523 job2: (groupid=0, jobs=1): err= 0: pid=1002190: Mon Oct 14 17:30:26 2024 00:13:27.523 read: IOPS=1017, BW=4071KiB/s (4168kB/s)(4144KiB/1018msec) 00:13:27.523 slat (nsec): min=6634, max=30801, avg=7759.05, stdev=2151.40 00:13:27.523 clat (usec): min=165, max=42255, avg=717.95, stdev=4573.45 00:13:27.523 lat (usec): min=172, max=42263, avg=725.71, stdev=4574.88 00:13:27.523 clat percentiles (usec): 00:13:27.523 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 186], 00:13:27.523 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:13:27.523 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 243], 00:13:27.523 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:13:27.523 | 99.99th=[42206] 00:13:27.523 write: IOPS=1508, BW=6035KiB/s (6180kB/s)(6144KiB/1018msec); 0 zone resets 00:13:27.523 slat (nsec): min=9122, max=40122, avg=10266.89, stdev=1658.87 00:13:27.523 clat (usec): min=111, max=275, avg=158.97, stdev=32.36 00:13:27.523 lat (usec): min=121, max=313, avg=169.24, stdev=32.60 00:13:27.523 clat percentiles (usec): 00:13:27.523 | 1.00th=[ 116], 5.00th=[ 120], 10.00th=[ 124], 20.00th=[ 130], 00:13:27.523 | 30.00th=[ 135], 40.00th=[ 143], 50.00th=[ 153], 60.00th=[ 167], 00:13:27.523 | 70.00th=[ 176], 80.00th=[ 186], 90.00th=[ 202], 95.00th=[ 227], 00:13:27.523 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 260], 99.95th=[ 277], 00:13:27.523 | 99.99th=[ 277] 00:13:27.523 bw ( KiB/s): min= 1208, max=11080, per=44.61%, avg=6144.00, stdev=6980.56, samples=2 00:13:27.523 iops : min= 302, max= 2770, avg=1536.00, stdev=1745.14, samples=2 00:13:27.523 lat (usec) : 250=98.09%, 500=1.40% 00:13:27.523 lat (msec) : 50=0.51% 00:13:27.523 cpu : usr=1.57%, sys=2.06%, ctx=2572, majf=0, minf=1 00:13:27.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:27.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.523 issued rwts: total=1036,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:27.523 job3: (groupid=0, jobs=1): err= 0: pid=1002191: Mon Oct 14 17:30:26 2024 00:13:27.523 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:13:27.523 slat (nsec): min=10118, max=26709, avg=21456.36, stdev=3533.56 00:13:27.523 clat (usec): min=40842, max=41051, avg=40960.91, stdev=52.71 00:13:27.523 lat (usec): min=40852, max=41073, avg=40982.37, stdev=53.61 00:13:27.523 clat percentiles (usec): 00:13:27.523 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:27.523 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:27.523 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:27.523 | 
99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:27.523 | 99.99th=[41157] 00:13:27.523 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:13:27.523 slat (nsec): min=10156, max=38640, avg=12645.98, stdev=2154.47 00:13:27.523 clat (usec): min=143, max=306, avg=192.65, stdev=19.88 00:13:27.523 lat (usec): min=156, max=331, avg=205.30, stdev=20.49 00:13:27.523 clat percentiles (usec): 00:13:27.523 | 1.00th=[ 149], 5.00th=[ 161], 10.00th=[ 169], 20.00th=[ 178], 00:13:27.523 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:13:27.523 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 225], 00:13:27.523 | 99.00th=[ 243], 99.50th=[ 265], 99.90th=[ 306], 99.95th=[ 306], 00:13:27.523 | 99.99th=[ 306] 00:13:27.523 bw ( KiB/s): min= 4096, max= 4096, per=29.74%, avg=4096.00, stdev= 0.00, samples=1 00:13:27.523 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:27.523 lat (usec) : 250=94.94%, 500=0.94% 00:13:27.523 lat (msec) : 50=4.12% 00:13:27.523 cpu : usr=0.69%, sys=0.69%, ctx=534, majf=0, minf=2 00:13:27.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:27.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.523 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:27.523 00:13:27.523 Run status group 0 (all jobs): 00:13:27.524 READ: bw=6183KiB/s (6331kB/s), 87.2KiB/s-4071KiB/s (89.3kB/s-4168kB/s), io=6436KiB (6590kB), run=1009-1041msec 00:13:27.524 WRITE: bw=13.4MiB/s (14.1MB/s), 1967KiB/s-6035KiB/s (2015kB/s-6180kB/s), io=14.0MiB (14.7MB), run=1009-1041msec 00:13:27.524 00:13:27.524 Disk stats (read/write): 00:13:27.524 nvme0n1: ios=59/512, merge=0/0, ticks=1290/89, in_queue=1379, util=98.19% 00:13:27.524 nvme0n2: ios=545/1024, merge=0/0, ticks=1541/155, in_queue=1696, util=91.36% 00:13:27.524 nvme0n3: ios=1089/1536, merge=0/0, ticks=640/240, in_queue=880, util=90.75% 00:13:27.524 nvme0n4: ios=75/512, merge=0/0, ticks=813/95, in_queue=908, util=95.39% 00:13:27.524 17:30:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:27.524 [global] 00:13:27.524 thread=1 00:13:27.524 invalidate=1 00:13:27.524 rw=write 00:13:27.524 time_based=1 00:13:27.524 runtime=1 00:13:27.524 ioengine=libaio 00:13:27.524 direct=1 00:13:27.524 bs=4096 00:13:27.524 iodepth=128 00:13:27.524 norandommap=0 00:13:27.524 numjobs=1 00:13:27.524 00:13:27.524 verify_dump=1 00:13:27.524 verify_backlog=512 00:13:27.524 verify_state_save=0 00:13:27.524 do_verify=1 00:13:27.524 verify=crc32c-intel 00:13:27.524 [job0] 00:13:27.524 filename=/dev/nvme0n1 00:13:27.524 [job1] 00:13:27.524 filename=/dev/nvme0n2 00:13:27.524 [job2] 00:13:27.524 filename=/dev/nvme0n3 00:13:27.524 [job3] 00:13:27.524 filename=/dev/nvme0n4 00:13:27.524 Could not set queue depth (nvme0n1) 00:13:27.524 Could not set queue depth (nvme0n2) 00:13:27.524 Could not set queue depth (nvme0n3) 00:13:27.524 Could not set queue depth (nvme0n4) 00:13:27.524 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.524 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.524 job2: (g=0): rw=write, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.524 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.524 fio-3.35 00:13:27.524 Starting 4 threads 00:13:28.901 00:13:28.901 job0: (groupid=0, jobs=1): err= 0: pid=1002560: Mon Oct 14 17:30:27 2024 00:13:28.901 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec) 00:13:28.901 slat (nsec): min=1332, max=10881k, avg=103696.64, stdev=741181.97 00:13:28.901 clat (usec): min=4291, max=22663, avg=12755.49, stdev=3346.87 00:13:28.901 lat (usec): min=4297, max=22679, avg=12859.19, stdev=3395.82 00:13:28.901 clat percentiles (usec): 00:13:28.901 | 1.00th=[ 4621], 5.00th=[ 7767], 10.00th=[ 9896], 20.00th=[11338], 00:13:28.901 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:13:28.901 | 70.00th=[12518], 80.00th=[14353], 90.00th=[18482], 95.00th=[20055], 00:13:28.901 | 99.00th=[21890], 99.50th=[22152], 99.90th=[22414], 99.95th=[22414], 00:13:28.901 | 99.99th=[22676] 00:13:28.901 write: IOPS=5352, BW=20.9MiB/s (21.9MB/s)(21.1MiB/1010msec); 0 zone resets 00:13:28.901 slat (usec): min=2, max=24670, avg=79.60, stdev=439.80 00:13:28.901 clat (usec): min=2722, max=22507, avg=11047.36, stdev=2614.62 00:13:28.901 lat (usec): min=2732, max=32061, avg=11126.96, stdev=2650.34 00:13:28.901 clat percentiles (usec): 00:13:28.901 | 1.00th=[ 3589], 5.00th=[ 5473], 10.00th=[ 7439], 20.00th=[ 9110], 00:13:28.901 | 30.00th=[10945], 40.00th=[11600], 50.00th=[11994], 60.00th=[12125], 00:13:28.901 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12518], 95.00th=[13698], 00:13:28.901 | 99.00th=[18482], 99.50th=[20841], 99.90th=[22152], 99.95th=[22414], 00:13:28.901 | 99.99th=[22414] 00:13:28.901 bw ( KiB/s): min=20784, max=21448, per=26.35%, avg=21116.00, stdev=469.52, samples=2 00:13:28.901 iops : min= 5196, max= 5362, avg=5279.00, stdev=117.38, samples=2 00:13:28.901 lat (msec) : 4=1.00%, 10=17.19%, 20=79.23%, 50=2.58% 00:13:28.901 cpu : usr=3.77%, sys=5.85%, ctx=685, majf=0, minf=1 00:13:28.901 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:28.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.901 issued rwts: total=5120,5406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.901 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.901 job1: (groupid=0, jobs=1): err= 0: pid=1002561: Mon Oct 14 17:30:27 2024 00:13:28.901 read: IOPS=5285, BW=20.6MiB/s (21.7MB/s)(20.8MiB/1008msec) 00:13:28.901 slat (nsec): min=1254, max=11605k, avg=105372.58, stdev=768265.94 00:13:28.901 clat (usec): min=3005, max=32954, avg=12679.06, stdev=3686.37 00:13:28.901 lat (usec): min=3015, max=32956, avg=12784.43, stdev=3739.50 00:13:28.901 clat percentiles (usec): 00:13:28.901 | 1.00th=[ 5014], 5.00th=[ 7373], 10.00th=[ 8291], 20.00th=[10683], 00:13:28.901 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12256], 00:13:28.901 | 70.00th=[12649], 80.00th=[15270], 90.00th=[18482], 95.00th=[20055], 00:13:28.901 | 99.00th=[21890], 99.50th=[22676], 99.90th=[32375], 99.95th=[32900], 00:13:28.901 | 99.99th=[32900] 00:13:28.901 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:13:28.901 slat (usec): min=2, max=6705, avg=72.59, stdev=261.86 00:13:28.901 clat (usec): min=1584, max=32955, avg=10699.06, stdev=3117.44 00:13:28.901 lat (usec): min=1596, max=32959, avg=10771.65, 
stdev=3137.08 00:13:28.901 clat percentiles (usec): 00:13:28.901 | 1.00th=[ 3359], 5.00th=[ 4752], 10.00th=[ 6456], 20.00th=[ 8225], 00:13:28.901 | 30.00th=[10421], 40.00th=[11338], 50.00th=[11863], 60.00th=[11994], 00:13:28.901 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12256], 95.00th=[12387], 00:13:28.901 | 99.00th=[23200], 99.50th=[23200], 99.90th=[23200], 99.95th=[23200], 00:13:28.901 | 99.99th=[32900] 00:13:28.901 bw ( KiB/s): min=20496, max=24560, per=28.11%, avg=22528.00, stdev=2873.68, samples=2 00:13:28.901 iops : min= 5124, max= 6140, avg=5632.00, stdev=718.42, samples=2 00:13:28.901 lat (msec) : 2=0.07%, 4=1.24%, 10=21.08%, 20=73.88%, 50=3.73% 00:13:28.901 cpu : usr=4.67%, sys=4.77%, ctx=719, majf=0, minf=1 00:13:28.901 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:28.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.901 issued rwts: total=5328,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.901 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.901 job2: (groupid=0, jobs=1): err= 0: pid=1002562: Mon Oct 14 17:30:27 2024 00:13:28.901 read: IOPS=4096, BW=16.0MiB/s (16.8MB/s)(16.2MiB/1011msec) 00:13:28.901 slat (nsec): min=1407, max=12892k, avg=117601.16, stdev=859764.47 00:13:28.901 clat (usec): min=4731, max=26667, avg=14441.75, stdev=3413.65 00:13:28.901 lat (usec): min=4742, max=26681, avg=14559.35, stdev=3462.46 00:13:28.901 clat percentiles (usec): 00:13:28.901 | 1.00th=[ 5735], 5.00th=[10552], 10.00th=[11994], 20.00th=[13042], 00:13:28.901 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13829], 00:13:28.901 | 70.00th=[14091], 80.00th=[15270], 90.00th=[20317], 95.00th=[22152], 00:13:28.901 | 99.00th=[24511], 99.50th=[24773], 99.90th=[25822], 99.95th=[25822], 00:13:28.901 | 99.99th=[26608] 00:13:28.901 write: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec); 0 zone resets 00:13:28.901 slat (usec): min=2, max=41533, avg=104.28, stdev=893.10 00:13:28.901 clat (usec): min=1764, max=100428, avg=12692.10, stdev=5168.54 00:13:28.901 lat (usec): min=1777, max=100442, avg=12796.38, stdev=5331.14 00:13:28.901 clat percentiles (msec): 00:13:28.901 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 11], 00:13:28.901 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 14], 00:13:28.901 | 70.00th=[ 14], 80.00th=[ 14], 90.00th=[ 14], 95.00th=[ 18], 00:13:28.902 | 99.00th=[ 24], 99.50th=[ 29], 99.90th=[ 101], 99.95th=[ 101], 00:13:28.902 | 99.99th=[ 101] 00:13:28.902 bw ( KiB/s): min=16384, max=19832, per=22.60%, avg=18108.00, stdev=2438.10, samples=2 00:13:28.902 iops : min= 4096, max= 4958, avg=4527.00, stdev=609.53, samples=2 00:13:28.902 lat (msec) : 2=0.11%, 4=0.55%, 10=9.87%, 20=83.18%, 50=6.10% 00:13:28.902 lat (msec) : 100=0.10%, 250=0.08% 00:13:28.902 cpu : usr=3.56%, sys=3.96%, ctx=531, majf=0, minf=1 00:13:28.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:28.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.902 issued rwts: total=4142,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.902 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.902 job3: (groupid=0, jobs=1): err= 0: pid=1002563: Mon Oct 14 17:30:27 2024 00:13:28.902 read: IOPS=4381, BW=17.1MiB/s (17.9MB/s)(17.1MiB/1002msec) 00:13:28.902 slat (nsec): 
min=1218, max=16539k, avg=102316.98, stdev=699501.82 00:13:28.902 clat (usec): min=777, max=37120, avg=13930.79, stdev=3774.82 00:13:28.902 lat (usec): min=4651, max=37142, avg=14033.11, stdev=3815.06 00:13:28.902 clat percentiles (usec): 00:13:28.902 | 1.00th=[ 5080], 5.00th=[ 9110], 10.00th=[10421], 20.00th=[12649], 00:13:28.902 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13698], 00:13:28.902 | 70.00th=[13960], 80.00th=[14484], 90.00th=[17695], 95.00th=[21103], 00:13:28.902 | 99.00th=[30802], 99.50th=[31327], 99.90th=[31327], 99.95th=[31327], 00:13:28.902 | 99.99th=[36963] 00:13:28.902 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:13:28.902 slat (nsec): min=1892, max=18354k, avg=101036.70, stdev=530313.39 00:13:28.902 clat (usec): min=1148, max=51699, avg=14211.67, stdev=5244.23 00:13:28.902 lat (usec): min=1159, max=51709, avg=14312.71, stdev=5270.49 00:13:28.902 clat percentiles (usec): 00:13:28.902 | 1.00th=[ 5211], 5.00th=[ 8356], 10.00th=[ 9765], 20.00th=[12387], 00:13:28.902 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13698], 00:13:28.902 | 70.00th=[13829], 80.00th=[14615], 90.00th=[16909], 95.00th=[23987], 00:13:28.902 | 99.00th=[35390], 99.50th=[41157], 99.90th=[47449], 99.95th=[47449], 00:13:28.902 | 99.99th=[51643] 00:13:28.902 bw ( KiB/s): min=17168, max=17168, per=21.42%, avg=17168.00, stdev= 0.00, samples=1 00:13:28.902 iops : min= 4292, max= 4292, avg=4292.00, stdev= 0.00, samples=1 00:13:28.902 lat (usec) : 1000=0.01% 00:13:28.902 lat (msec) : 2=0.03%, 4=0.38%, 10=9.30%, 20=83.05%, 50=7.21% 00:13:28.902 lat (msec) : 100=0.01% 00:13:28.902 cpu : usr=2.90%, sys=4.90%, ctx=569, majf=0, minf=1 00:13:28.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:28.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:28.902 issued rwts: total=4390,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.902 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:28.902 00:13:28.902 Run status group 0 (all jobs): 00:13:28.902 READ: bw=73.3MiB/s (76.9MB/s), 16.0MiB/s-20.6MiB/s (16.8MB/s-21.7MB/s), io=74.1MiB (77.7MB), run=1002-1011msec 00:13:28.902 WRITE: bw=78.3MiB/s (82.1MB/s), 17.8MiB/s-21.8MiB/s (18.7MB/s-22.9MB/s), io=79.1MiB (83.0MB), run=1002-1011msec 00:13:28.902 00:13:28.902 Disk stats (read/write): 00:13:28.902 nvme0n1: ios=4255/4608, merge=0/0, ticks=53457/49905, in_queue=103362, util=91.57% 00:13:28.902 nvme0n2: ios=4658/4839, merge=0/0, ticks=55954/49120, in_queue=105074, util=91.06% 00:13:28.902 nvme0n3: ios=3635/3655, merge=0/0, ticks=50457/41001, in_queue=91458, util=97.92% 00:13:28.902 nvme0n4: ios=3626/4039, merge=0/0, ticks=28892/26538, in_queue=55430, util=98.53% 00:13:28.902 17:30:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:28.902 [global] 00:13:28.902 thread=1 00:13:28.902 invalidate=1 00:13:28.902 rw=randwrite 00:13:28.902 time_based=1 00:13:28.902 runtime=1 00:13:28.902 ioengine=libaio 00:13:28.902 direct=1 00:13:28.902 bs=4096 00:13:28.902 iodepth=128 00:13:28.902 norandommap=0 00:13:28.902 numjobs=1 00:13:28.902 00:13:28.902 verify_dump=1 00:13:28.902 verify_backlog=512 00:13:28.902 verify_state_save=0 00:13:28.902 do_verify=1 00:13:28.902 verify=crc32c-intel 00:13:28.902 [job0] 00:13:28.902 
filename=/dev/nvme0n1 00:13:28.902 [job1] 00:13:28.902 filename=/dev/nvme0n2 00:13:28.902 [job2] 00:13:28.902 filename=/dev/nvme0n3 00:13:28.902 [job3] 00:13:28.902 filename=/dev/nvme0n4 00:13:28.902 Could not set queue depth (nvme0n1) 00:13:28.902 Could not set queue depth (nvme0n2) 00:13:28.902 Could not set queue depth (nvme0n3) 00:13:28.902 Could not set queue depth (nvme0n4) 00:13:29.161 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:29.161 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:29.161 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:29.161 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:29.161 fio-3.35 00:13:29.161 Starting 4 threads 00:13:30.537 00:13:30.537 job0: (groupid=0, jobs=1): err= 0: pid=1002955: Mon Oct 14 17:30:29 2024 00:13:30.537 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:13:30.537 slat (nsec): min=1246, max=16217k, avg=128882.70, stdev=939016.56 00:13:30.537 clat (usec): min=4365, max=39121, avg=15532.42, stdev=5782.31 00:13:30.537 lat (usec): min=4371, max=39148, avg=15661.30, stdev=5860.71 00:13:30.537 clat percentiles (usec): 00:13:30.537 | 1.00th=[ 5538], 5.00th=[10159], 10.00th=[10290], 20.00th=[10552], 00:13:30.537 | 30.00th=[11207], 40.00th=[11469], 50.00th=[13173], 60.00th=[15926], 00:13:30.537 | 70.00th=[18220], 80.00th=[22152], 90.00th=[23725], 95.00th=[27132], 00:13:30.537 | 99.00th=[31327], 99.50th=[32900], 99.90th=[34866], 99.95th=[37487], 00:13:30.537 | 99.99th=[39060] 00:13:30.537 write: IOPS=3273, BW=12.8MiB/s (13.4MB/s)(12.9MiB/1011msec); 0 zone resets 00:13:30.537 slat (usec): min=2, max=17236, avg=177.59, stdev=983.67 00:13:30.537 clat (usec): min=1353, max=90907, avg=24346.61, stdev=17198.90 00:13:30.537 lat (usec): min=1366, max=90915, avg=24524.20, stdev=17272.43 00:13:30.537 clat percentiles (usec): 00:13:30.537 | 1.00th=[ 3425], 5.00th=[ 8160], 10.00th=[10814], 20.00th=[11600], 00:13:30.537 | 30.00th=[12256], 40.00th=[16909], 50.00th=[19268], 60.00th=[22938], 00:13:30.537 | 70.00th=[25035], 80.00th=[32113], 90.00th=[49546], 95.00th=[65799], 00:13:30.537 | 99.00th=[84411], 99.50th=[88605], 99.90th=[90702], 99.95th=[90702], 00:13:30.537 | 99.99th=[90702] 00:13:30.537 bw ( KiB/s): min=12263, max=13168, per=18.52%, avg=12715.50, stdev=639.93, samples=2 00:13:30.537 iops : min= 3065, max= 3292, avg=3178.50, stdev=160.51, samples=2 00:13:30.537 lat (msec) : 2=0.03%, 4=0.75%, 10=4.54%, 20=58.32%, 50=31.26% 00:13:30.537 lat (msec) : 100=5.09% 00:13:30.537 cpu : usr=3.07%, sys=3.47%, ctx=387, majf=0, minf=1 00:13:30.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:30.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:30.537 issued rwts: total=3072,3310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.537 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:30.537 job1: (groupid=0, jobs=1): err= 0: pid=1002970: Mon Oct 14 17:30:29 2024 00:13:30.537 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:13:30.537 slat (nsec): min=1109, max=24769k, avg=98013.05, stdev=820746.97 00:13:30.537 clat (usec): min=1484, max=71614, avg=12371.17, stdev=8028.16 00:13:30.537 lat (usec): min=1487, max=71620, 
avg=12469.18, stdev=8094.24 00:13:30.537 clat percentiles (usec): 00:13:30.537 | 1.00th=[ 1991], 5.00th=[ 4146], 10.00th=[ 6980], 20.00th=[ 8848], 00:13:30.537 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[10683], 60.00th=[10945], 00:13:30.537 | 70.00th=[11469], 80.00th=[12780], 90.00th=[18220], 95.00th=[28181], 00:13:30.537 | 99.00th=[48497], 99.50th=[48497], 99.90th=[51119], 99.95th=[52691], 00:13:30.537 | 99.99th=[71828] 00:13:30.537 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(21.9MiB/1005msec); 0 zone resets 00:13:30.537 slat (nsec): min=1843, max=10319k, avg=83594.16, stdev=528370.86 00:13:30.537 clat (usec): min=676, max=30475, avg=11343.98, stdev=4987.60 00:13:30.537 lat (usec): min=685, max=30483, avg=11427.58, stdev=5017.09 00:13:30.537 clat percentiles (usec): 00:13:30.537 | 1.00th=[ 2507], 5.00th=[ 4883], 10.00th=[ 6849], 20.00th=[ 8160], 00:13:30.537 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[10290], 60.00th=[10552], 00:13:30.537 | 70.00th=[11338], 80.00th=[14222], 90.00th=[17957], 95.00th=[22676], 00:13:30.537 | 99.00th=[27657], 99.50th=[28705], 99.90th=[30540], 99.95th=[30540], 00:13:30.537 | 99.99th=[30540] 00:13:30.537 bw ( KiB/s): min=16518, max=27296, per=31.91%, avg=21907.00, stdev=7621.20, samples=2 00:13:30.537 iops : min= 4129, max= 6824, avg=5476.50, stdev=1905.65, samples=2 00:13:30.537 lat (usec) : 750=0.03% 00:13:30.537 lat (msec) : 2=0.97%, 4=3.61%, 10=33.54%, 20=53.31%, 50=8.32% 00:13:30.537 lat (msec) : 100=0.22% 00:13:30.537 cpu : usr=3.78%, sys=4.48%, ctx=496, majf=0, minf=1 00:13:30.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:30.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:30.537 issued rwts: total=5120,5609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.537 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:30.537 job2: (groupid=0, jobs=1): err= 0: pid=1002990: Mon Oct 14 17:30:29 2024 00:13:30.537 read: IOPS=3676, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1007msec) 00:13:30.537 slat (nsec): min=1715, max=18594k, avg=121533.47, stdev=858540.87 00:13:30.537 clat (usec): min=2883, max=49667, avg=15531.92, stdev=7724.60 00:13:30.537 lat (usec): min=6290, max=49696, avg=15653.45, stdev=7796.97 00:13:30.537 clat percentiles (usec): 00:13:30.537 | 1.00th=[ 6718], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[10814], 00:13:30.537 | 30.00th=[11076], 40.00th=[11338], 50.00th=[12256], 60.00th=[13435], 00:13:30.537 | 70.00th=[16450], 80.00th=[19530], 90.00th=[24249], 95.00th=[35914], 00:13:30.537 | 99.00th=[43779], 99.50th=[43779], 99.90th=[46924], 99.95th=[46924], 00:13:30.537 | 99.99th=[49546] 00:13:30.537 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:13:30.537 slat (usec): min=2, max=16656, avg=122.36, stdev=719.47 00:13:30.537 clat (usec): min=602, max=56810, avg=17113.76, stdev=11758.37 00:13:30.537 lat (usec): min=639, max=56822, avg=17236.12, stdev=11846.62 00:13:30.537 clat percentiles (usec): 00:13:30.537 | 1.00th=[ 3556], 5.00th=[ 7373], 10.00th=[ 8356], 20.00th=[10421], 00:13:30.537 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11469], 60.00th=[14091], 00:13:30.537 | 70.00th=[17433], 80.00th=[20841], 90.00th=[36439], 95.00th=[48497], 00:13:30.537 | 99.00th=[54264], 99.50th=[56361], 99.90th=[56886], 99.95th=[56886], 00:13:30.537 | 99.99th=[56886] 00:13:30.537 bw ( KiB/s): min=12728, max=19960, per=23.81%, avg=16344.00, stdev=5113.80, samples=2 00:13:30.537 iops : min= 
3182, max= 4990, avg=4086.00, stdev=1278.45, samples=2 00:13:30.537 lat (usec) : 750=0.01% 00:13:30.537 lat (msec) : 2=0.23%, 4=0.40%, 10=13.38%, 20=64.52%, 50=19.42% 00:13:30.537 lat (msec) : 100=2.05% 00:13:30.537 cpu : usr=3.28%, sys=5.67%, ctx=394, majf=0, minf=1 00:13:30.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:30.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:30.537 issued rwts: total=3702,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.537 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:30.537 job3: (groupid=0, jobs=1): err= 0: pid=1002997: Mon Oct 14 17:30:29 2024 00:13:30.537 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:13:30.537 slat (nsec): min=1072, max=24259k, avg=117123.58, stdev=945540.11 00:13:30.537 clat (usec): min=1078, max=62379, avg=15038.95, stdev=8679.20 00:13:30.537 lat (usec): min=1084, max=62387, avg=15156.07, stdev=8752.70 00:13:30.537 clat percentiles (usec): 00:13:30.537 | 1.00th=[ 1156], 5.00th=[ 6063], 10.00th=[ 8094], 20.00th=[10028], 00:13:30.537 | 30.00th=[10814], 40.00th=[11994], 50.00th=[13042], 60.00th=[13829], 00:13:30.537 | 70.00th=[16188], 80.00th=[18482], 90.00th=[24249], 95.00th=[30016], 00:13:30.537 | 99.00th=[55313], 99.50th=[57934], 99.90th=[60556], 99.95th=[62129], 00:13:30.537 | 99.99th=[62129] 00:13:30.537 write: IOPS=4296, BW=16.8MiB/s (17.6MB/s)(16.9MiB/1009msec); 0 zone resets 00:13:30.537 slat (nsec): min=1860, max=21061k, avg=103479.16, stdev=744665.88 00:13:30.537 clat (usec): min=1563, max=62378, avg=15321.31, stdev=8983.63 00:13:30.537 lat (usec): min=1570, max=62386, avg=15424.79, stdev=9049.10 00:13:30.537 clat percentiles (usec): 00:13:30.537 | 1.00th=[ 2802], 5.00th=[ 3949], 10.00th=[ 7242], 20.00th=[ 8356], 00:13:30.537 | 30.00th=[ 9765], 40.00th=[10814], 50.00th=[13435], 60.00th=[15795], 00:13:30.537 | 70.00th=[18744], 80.00th=[21627], 90.00th=[24511], 95.00th=[28967], 00:13:30.537 | 99.00th=[53216], 99.50th=[53216], 99.90th=[53740], 99.95th=[62129], 00:13:30.537 | 99.99th=[62129] 00:13:30.537 bw ( KiB/s): min=13832, max=19792, per=24.49%, avg=16812.00, stdev=4214.36, samples=2 00:13:30.537 iops : min= 3458, max= 4948, avg=4203.00, stdev=1053.59, samples=2 00:13:30.537 lat (msec) : 2=1.45%, 4=2.86%, 10=20.41%, 20=55.85%, 50=17.65% 00:13:30.537 lat (msec) : 100=1.78% 00:13:30.537 cpu : usr=3.08%, sys=3.87%, ctx=357, majf=0, minf=1 00:13:30.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:30.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:30.537 issued rwts: total=4096,4335,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.537 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:30.537 00:13:30.537 Run status group 0 (all jobs): 00:13:30.537 READ: bw=61.8MiB/s (64.8MB/s), 11.9MiB/s-19.9MiB/s (12.4MB/s-20.9MB/s), io=62.5MiB (65.5MB), run=1005-1011msec 00:13:30.537 WRITE: bw=67.0MiB/s (70.3MB/s), 12.8MiB/s-21.8MiB/s (13.4MB/s-22.9MB/s), io=67.8MiB (71.1MB), run=1005-1011msec 00:13:30.537 00:13:30.537 Disk stats (read/write): 00:13:30.537 nvme0n1: ios=2610/2719, merge=0/0, ticks=40054/64944, in_queue=104998, util=86.27% 00:13:30.537 nvme0n2: ios=4127/4416, merge=0/0, ticks=28925/30012, in_queue=58937, util=100.00% 00:13:30.537 nvme0n3: ios=3622/3631, merge=0/0, ticks=36922/32011, 
in_queue=68933, util=97.81% 00:13:30.537 nvme0n4: ios=3411/3584, merge=0/0, ticks=43745/46738, in_queue=90483, util=90.63% 00:13:30.537 17:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:30.537 17:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1003172 00:13:30.537 17:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:30.537 17:30:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:30.537 [global] 00:13:30.537 thread=1 00:13:30.537 invalidate=1 00:13:30.537 rw=read 00:13:30.537 time_based=1 00:13:30.537 runtime=10 00:13:30.537 ioengine=libaio 00:13:30.537 direct=1 00:13:30.537 bs=4096 00:13:30.537 iodepth=1 00:13:30.537 norandommap=1 00:13:30.537 numjobs=1 00:13:30.537 00:13:30.537 [job0] 00:13:30.537 filename=/dev/nvme0n1 00:13:30.537 [job1] 00:13:30.537 filename=/dev/nvme0n2 00:13:30.537 [job2] 00:13:30.537 filename=/dev/nvme0n3 00:13:30.537 [job3] 00:13:30.537 filename=/dev/nvme0n4 00:13:30.537 Could not set queue depth (nvme0n1) 00:13:30.537 Could not set queue depth (nvme0n2) 00:13:30.537 Could not set queue depth (nvme0n3) 00:13:30.537 Could not set queue depth (nvme0n4) 00:13:30.796 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:30.796 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:30.796 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:30.796 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:30.796 fio-3.35 00:13:30.796 Starting 4 threads 00:13:33.329 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:33.587 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=40882176, buflen=4096 00:13:33.587 fio: pid=1003480, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:33.587 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:33.846 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=8056832, buflen=4096 00:13:33.846 fio: pid=1003473, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:33.846 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:33.846 17:30:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:34.105 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=57913344, buflen=4096 00:13:34.105 fio: pid=1003444, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:34.105 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.105 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:34.365 17:30:33 
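This final pass is the hot-remove check: fio-wrapper is started in the background for a 10-second read run (fio_pid captured at fio.sh@59), the script sleeps 3 seconds to let IO get in flight, then deletes the arrays and malloc bdevs out from under the running jobs. The "Operation not supported" io_u errors on /dev/nvme0n1 through /dev/nvme0n4 are therefore the expected outcome, not a failure. The pattern, as reconstructed from the trace (fio-wrapper arguments as invoked above; Malloc0/Malloc1 deletes are traced before this point, the remaining mallocs just after):

  rpc=./scripts/rpc.py
  ./scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3                              # let the four read jobs ramp up
  $rpc bdev_raid_delete concat0        # yank devices mid-read
  $rpc bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      $rpc bdev_malloc_delete $m
  done
  wait $fio_pid                        # fio exits after reporting ENOTSUP per device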
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.365 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:34.365 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=13692928, buflen=4096 00:13:34.365 fio: pid=1003457, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:34.365 00:13:34.365 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1003444: Mon Oct 14 17:30:33 2024 00:13:34.365 read: IOPS=4449, BW=17.4MiB/s (18.2MB/s)(55.2MiB/3178msec) 00:13:34.365 slat (usec): min=6, max=17668, avg=11.29, stdev=229.55 00:13:34.365 clat (usec): min=157, max=4226, avg=211.80, stdev=42.26 00:13:34.365 lat (usec): min=164, max=18029, avg=223.09, stdev=235.74 00:13:34.365 clat percentiles (usec): 00:13:34.365 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 194], 00:13:34.365 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:13:34.365 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 245], 95.00th=[ 269], 00:13:34.365 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 351], 99.95th=[ 424], 00:13:34.365 | 99.99th=[ 510] 00:13:34.365 bw ( KiB/s): min=14573, max=18976, per=52.34%, avg=17970.17, stdev=1741.15, samples=6 00:13:34.365 iops : min= 3643, max= 4744, avg=4492.50, stdev=435.38, samples=6 00:13:34.365 lat (usec) : 250=91.34%, 500=8.64%, 750=0.01% 00:13:34.365 lat (msec) : 10=0.01% 00:13:34.365 cpu : usr=1.04%, sys=4.22%, ctx=14147, majf=0, minf=2 00:13:34.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.365 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.365 issued rwts: total=14140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.365 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.365 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1003457: Mon Oct 14 17:30:33 2024 00:13:34.365 read: IOPS=975, BW=3900KiB/s (3993kB/s)(13.1MiB/3429msec) 00:13:34.365 slat (usec): min=5, max=11766, avg=23.20, stdev=378.36 00:13:34.365 clat (usec): min=154, max=42019, avg=993.48, stdev=5594.16 00:13:34.365 lat (usec): min=161, max=42041, avg=1016.69, stdev=5607.08 00:13:34.365 clat percentiles (usec): 00:13:34.365 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 190], 00:13:34.365 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 210], 00:13:34.365 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 241], 95.00th=[ 265], 00:13:34.365 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:13:34.365 | 99.99th=[42206] 00:13:34.365 bw ( KiB/s): min= 96, max=17207, per=9.12%, avg=3131.83, stdev=6909.27, samples=6 00:13:34.365 iops : min= 24, max= 4301, avg=782.83, stdev=1727.01, samples=6 00:13:34.365 lat (usec) : 250=92.19%, 500=5.77% 00:13:34.365 lat (msec) : 4=0.03%, 10=0.06%, 50=1.91% 00:13:34.365 cpu : usr=0.23%, sys=0.90%, ctx=3350, majf=0, minf=1 00:13:34.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.365 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.365 issued rwts: 
total=3344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.365 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.365 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1003473: Mon Oct 14 17:30:33 2024 00:13:34.365 read: IOPS=659, BW=2637KiB/s (2700kB/s)(7868KiB/2984msec) 00:13:34.365 slat (usec): min=6, max=11673, avg=17.28, stdev=303.56 00:13:34.365 clat (usec): min=168, max=41975, avg=1487.61, stdev=7030.93 00:13:34.365 lat (usec): min=176, max=41998, avg=1504.90, stdev=7037.28 00:13:34.365 clat percentiles (usec): 00:13:34.365 | 1.00th=[ 186], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 210], 00:13:34.365 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:13:34.365 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 293], 00:13:34.365 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:13:34.365 | 99.99th=[42206] 00:13:34.365 bw ( KiB/s): min= 120, max= 320, per=0.56%, avg=192.00, stdev=83.52, samples=5 00:13:34.365 iops : min= 30, max= 80, avg=48.00, stdev=20.88, samples=5 00:13:34.365 lat (usec) : 250=80.79%, 500=15.85%, 750=0.15% 00:13:34.365 lat (msec) : 4=0.05%, 50=3.10% 00:13:34.365 cpu : usr=0.03%, sys=0.77%, ctx=1970, majf=0, minf=2 00:13:34.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.365 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.366 issued rwts: total=1968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.366 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1003480: Mon Oct 14 17:30:33 2024 00:13:34.366 read: IOPS=3639, BW=14.2MiB/s (14.9MB/s)(39.0MiB/2743msec) 00:13:34.366 slat (nsec): min=2973, max=55154, avg=7547.22, stdev=1923.71 00:13:34.366 clat (usec): min=163, max=41832, avg=265.17, stdev=1289.10 00:13:34.366 lat (usec): min=171, max=41835, avg=272.72, stdev=1289.08 00:13:34.366 clat percentiles (usec): 00:13:34.366 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 202], 00:13:34.366 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 225], 00:13:34.366 | 70.00th=[ 233], 80.00th=[ 245], 90.00th=[ 260], 95.00th=[ 273], 00:13:34.366 | 99.00th=[ 338], 99.50th=[ 375], 99.90th=[40633], 99.95th=[40633], 00:13:34.366 | 99.99th=[41681] 00:13:34.366 bw ( KiB/s): min=13352, max=16672, per=43.05%, avg=14779.20, stdev=1289.61, samples=5 00:13:34.366 iops : min= 3338, max= 4168, avg=3694.80, stdev=322.40, samples=5 00:13:34.366 lat (usec) : 250=84.11%, 500=15.75%, 750=0.02% 00:13:34.366 lat (msec) : 4=0.01%, 50=0.10% 00:13:34.366 cpu : usr=1.17%, sys=3.06%, ctx=9982, majf=0, minf=2 00:13:34.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.366 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.366 issued rwts: total=9982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:34.366 00:13:34.366 Run status group 0 (all jobs): 00:13:34.366 READ: bw=33.5MiB/s (35.2MB/s), 2637KiB/s-17.4MiB/s (2700kB/s-18.2MB/s), io=115MiB (121MB), run=2743-3429msec 00:13:34.366 00:13:34.366 Disk stats (read/write): 00:13:34.366 nvme0n1: ios=13890/0, merge=0/0, ticks=3783/0, 
in_queue=3783, util=98.03% 00:13:34.366 nvme0n2: ios=3341/0, merge=0/0, ticks=3233/0, in_queue=3233, util=94.98% 00:13:34.366 nvme0n3: ios=1578/0, merge=0/0, ticks=2832/0, in_queue=2832, util=96.52% 00:13:34.366 nvme0n4: ios=9618/0, merge=0/0, ticks=2484/0, in_queue=2484, util=96.41% 00:13:34.624 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.624 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:34.624 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.624 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:34.882 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:34.882 17:30:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:35.141 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:35.141 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:35.400 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:35.400 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1003172 00:13:35.400 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:35.400 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:35.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.400 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:35.400 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:35.400 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:35.400 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.400 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:35.400 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.400 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:35.400 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:35.400 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:35.400 nvmf hotplug test: fio failed as expected 00:13:35.400 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
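Condensed, the hotplug check that just completed follows this shape (a sketch of the fio.sh flow, using the bdev names and fio-wrapper flags from this run; paths are repo-relative and error handling is simplified):

    # start a 10s read job against the exported namespaces, in the background
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    # delete the backing bdevs while fio is still issuing reads
    scripts/rpc.py bdev_raid_delete concat0
    scripts/rpc.py bdev_raid_delete raid0
    for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
    done
    # every job dies with err=95 (Operation not supported), so fio exits nonzero,
    # and a nonzero status is the passing outcome for a hotplug test
    fio_status=0
    wait "$fio_pid" || fio_status=$?
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    [ "$fio_status" -eq 0 ] || echo 'nvmf hotplug test: fio failed as expected'
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1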
00:13:35.659 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:35.659 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:35.659 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:35.659 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:35.659 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:35.659 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:35.659 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:13:35.659 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:35.659 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:13:35.659 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:35.659 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:35.659 rmmod nvme_tcp 00:13:35.659 rmmod nvme_fabrics 00:13:35.659 rmmod nvme_keyring 00:13:35.659 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:35.659 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:13:35.659 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:13:35.659 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1000459 ']' 00:13:35.659 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1000459 00:13:35.660 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1000459 ']' 00:13:35.660 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1000459 00:13:35.660 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:13:35.660 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:35.660 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1000459 00:13:35.660 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:35.660 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:35.660 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1000459' 00:13:35.660 killing process with pid 1000459 00:13:35.919 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1000459 00:13:35.919 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1000459 00:13:35.919 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:35.919 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:35.919 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:35.919 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 
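The killprocess helper traced above reduces to a liveness check plus an ordinary kill and reap; a sketch with this run's pid:

    nvmfpid=1000459
    kill -0 "$nvmfpid" || return 0                     # app already gone, nothing to do
    process_name=$(ps --no-headers -o comm= "$nvmfpid")
    # comm= reports reactor_0 here, not sudo, so a plain kill of the pid is safe
    [ "$process_name" = sudo ] || kill "$nvmfpid"
    wait "$nvmfpid"                                    # reap so the pid cannot be reused mid-test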
00:13:35.919 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:13:35.919 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:35.919 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:13:35.919 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:35.919 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:35.919 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.919 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.919 17:30:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:38.455 00:13:38.455 real 0m26.883s 00:13:38.455 user 1m46.542s 00:13:38.455 sys 0m8.665s 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.455 ************************************ 00:13:38.455 END TEST nvmf_fio_target 00:13:38.455 ************************************ 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:38.455 ************************************ 00:13:38.455 START TEST nvmf_bdevio 00:13:38.455 ************************************ 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:38.455 * Looking for test storage... 
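One detail worth pulling out of that teardown: every firewall rule the harness adds is tagged with an SPDK_NVMF comment, so iptr can strip them all without tracking individual rules. Both halves appear verbatim in this log:

    # setup (ipts): tag the rule with its own text under an SPDK_NVMF comment
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # teardown (iptr): rewrite the ruleset minus anything tagged SPDK_NVMF
    iptables-save | grep -v SPDK_NVMF | iptables-restore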
00:13:38.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:38.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.455 --rc genhtml_branch_coverage=1 00:13:38.455 --rc genhtml_function_coverage=1 00:13:38.455 --rc genhtml_legend=1 00:13:38.455 --rc geninfo_all_blocks=1 00:13:38.455 --rc geninfo_unexecuted_blocks=1 00:13:38.455 00:13:38.455 ' 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:38.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.455 --rc genhtml_branch_coverage=1 00:13:38.455 --rc genhtml_function_coverage=1 00:13:38.455 --rc genhtml_legend=1 00:13:38.455 --rc geninfo_all_blocks=1 00:13:38.455 --rc geninfo_unexecuted_blocks=1 00:13:38.455 00:13:38.455 ' 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:38.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.455 --rc genhtml_branch_coverage=1 00:13:38.455 --rc genhtml_function_coverage=1 00:13:38.455 --rc genhtml_legend=1 00:13:38.455 --rc geninfo_all_blocks=1 00:13:38.455 --rc geninfo_unexecuted_blocks=1 00:13:38.455 00:13:38.455 ' 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:38.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.455 --rc genhtml_branch_coverage=1 00:13:38.455 --rc genhtml_function_coverage=1 00:13:38.455 --rc genhtml_legend=1 00:13:38.455 --rc geninfo_all_blocks=1 00:13:38.455 --rc geninfo_unexecuted_blocks=1 00:13:38.455 00:13:38.455 ' 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.455 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:38.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:13:38.456 17:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:45.025 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:45.025 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:45.025 17:30:42 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:45.025 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:45.026 Found net devices under 0000:86:00.0: cvl_0_0 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:45.026 Found net devices under 0000:86:00.1: cvl_0_1 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:45.026 
17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.026 17:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:45.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:45.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:13:45.026 00:13:45.026 --- 10.0.0.2 ping statistics --- 00:13:45.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.026 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:45.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
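Spelled out, the namespace plumbing nvmf_tcp_init just performed: the target-side port (cvl_0_0) moves into its own network namespace and gets 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace with 10.0.0.1, and the two pings prove reachability in both directions (device names and addresses are from this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns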
00:13:45.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:13:45.026 00:13:45.026 --- 10.0.0.1 ping statistics --- 00:13:45.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.026 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1007778 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1007778 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1007778 ']' 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.026 [2024-10-14 17:30:43.344699] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
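nvmfappstart then launches the target inside that namespace and blocks until its RPC socket answers. A simplified sketch of that step (the polling loop here is my reconstruction; the real waitforlisten also enforces a timeout):

    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # poll the app's default RPC socket until it accepts requests
    until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done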
00:13:45.026 [2024-10-14 17:30:43.344752] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.026 [2024-10-14 17:30:43.419327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:45.026 [2024-10-14 17:30:43.461435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.026 [2024-10-14 17:30:43.461472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.026 [2024-10-14 17:30:43.461479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.026 [2024-10-14 17:30:43.461484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.026 [2024-10-14 17:30:43.461489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:45.026 [2024-10-14 17:30:43.463109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:45.026 [2024-10-14 17:30:43.463217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:45.026 [2024-10-14 17:30:43.463336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.026 [2024-10-14 17:30:43.463338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.026 [2024-10-14 17:30:43.611249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.026 Malloc0 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.026 17:30:43 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.026 [2024-10-14 17:30:43.672883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.026 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.027 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:45.027 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:45.027 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:13:45.027 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:13:45.027 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:13:45.027 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:13:45.027 { 00:13:45.027 "params": { 00:13:45.027 "name": "Nvme$subsystem", 00:13:45.027 "trtype": "$TEST_TRANSPORT", 00:13:45.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:45.027 "adrfam": "ipv4", 00:13:45.027 "trsvcid": "$NVMF_PORT", 00:13:45.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:45.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:45.027 "hdgst": ${hdgst:-false}, 00:13:45.027 "ddgst": ${ddgst:-false} 00:13:45.027 }, 00:13:45.027 "method": "bdev_nvme_attach_controller" 00:13:45.027 } 00:13:45.027 EOF 00:13:45.027 )") 00:13:45.027 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:13:45.027 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:13:45.027 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:13:45.027 17:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:13:45.027 "params": { 00:13:45.027 "name": "Nvme1", 00:13:45.027 "trtype": "tcp", 00:13:45.027 "traddr": "10.0.0.2", 00:13:45.027 "adrfam": "ipv4", 00:13:45.027 "trsvcid": "4420", 00:13:45.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:45.027 "hdgst": false, 00:13:45.027 "ddgst": false 00:13:45.027 }, 00:13:45.027 "method": "bdev_nvme_attach_controller" 00:13:45.027 }' 00:13:45.027 [2024-10-14 17:30:43.723325] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
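Collected in one place, the target-side bring-up that bdevio.sh just drove through rpc_cmd (every call appears verbatim above; the 64 and 512 are MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE, matching the 64 MiB / 512-byte Nvme1n1 reported below):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # transport flags exactly as the harness passed them
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420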
00:13:45.027 [2024-10-14 17:30:43.723367] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007812 ] 00:13:45.027 [2024-10-14 17:30:43.794753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:45.027 [2024-10-14 17:30:43.838616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.027 [2024-10-14 17:30:43.838691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.027 [2024-10-14 17:30:43.838692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.027 I/O targets: 00:13:45.027 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:45.027 00:13:45.027 00:13:45.027 CUnit - A unit testing framework for C - Version 2.1-3 00:13:45.027 http://cunit.sourceforge.net/ 00:13:45.027 00:13:45.027 00:13:45.027 Suite: bdevio tests on: Nvme1n1 00:13:45.027 Test: blockdev write read block ...passed 00:13:45.027 Test: blockdev write zeroes read block ...passed 00:13:45.027 Test: blockdev write zeroes read no split ...passed 00:13:45.027 Test: blockdev write zeroes read split ...passed 00:13:45.027 Test: blockdev write zeroes read split partial ...passed 00:13:45.027 Test: blockdev reset ...[2024-10-14 17:30:44.155688] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:45.027 [2024-10-14 17:30:44.155750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x133d400 (9): Bad file descriptor 00:13:45.286 [2024-10-14 17:30:44.210635] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
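The Nvme1n1 bdev being exercised comes from the bdev_nvme_attach_controller entry in the JSON printed above, which bdevio loads via --json. For reference, the same attach expressed as an interactive RPC would look roughly like this (a hypothetical invocation assuming the usual rpc.py flag names; it is not part of this run):

    scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1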
00:13:45.286 passed 00:13:45.286 Test: blockdev write read 8 blocks ...passed 00:13:45.286 Test: blockdev write read size > 128k ...passed 00:13:45.286 Test: blockdev write read invalid size ...passed 00:13:45.286 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:45.286 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:45.286 Test: blockdev write read max offset ...passed 00:13:45.286 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:45.286 Test: blockdev writev readv 8 blocks ...passed 00:13:45.545 Test: blockdev writev readv 30 x 1block ...passed 00:13:45.545 Test: blockdev writev readv block ...passed 00:13:45.545 Test: blockdev writev readv size > 128k ...passed 00:13:45.545 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:45.545 Test: blockdev comparev and writev ...[2024-10-14 17:30:44.505800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:45.545 [2024-10-14 17:30:44.505829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:45.545 [2024-10-14 17:30:44.505843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:45.545 [2024-10-14 17:30:44.505850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:45.545 [2024-10-14 17:30:44.506087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:45.545 [2024-10-14 17:30:44.506097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:45.545 [2024-10-14 17:30:44.506108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:45.545 [2024-10-14 17:30:44.506115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:45.545 [2024-10-14 17:30:44.506336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:45.546 [2024-10-14 17:30:44.506347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:45.546 [2024-10-14 17:30:44.506358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:45.546 [2024-10-14 17:30:44.506365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:45.546 [2024-10-14 17:30:44.506595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:45.546 [2024-10-14 17:30:44.506609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:45.546 [2024-10-14 17:30:44.506621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:45.546 [2024-10-14 17:30:44.506629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:45.546 passed 00:13:45.546 Test: blockdev nvme passthru rw ...passed 00:13:45.546 Test: blockdev nvme passthru vendor specific ...[2024-10-14 17:30:44.588894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:45.546 [2024-10-14 17:30:44.588913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:45.546 [2024-10-14 17:30:44.589020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:45.546 [2024-10-14 17:30:44.589031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:45.546 [2024-10-14 17:30:44.589148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:45.546 [2024-10-14 17:30:44.589158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:45.546 [2024-10-14 17:30:44.589270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:45.546 [2024-10-14 17:30:44.589281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:45.546 passed 00:13:45.546 Test: blockdev nvme admin passthru ...passed 00:13:45.546 Test: blockdev copy ...passed 00:13:45.546 00:13:45.546 Run Summary: Type Total Ran Passed Failed Inactive 00:13:45.546 suites 1 1 n/a 0 0 00:13:45.546 tests 23 23 23 0 0 00:13:45.546 asserts 152 152 152 0 n/a 00:13:45.546 00:13:45.546 Elapsed time = 1.221 seconds 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:45.805 rmmod nvme_tcp 00:13:45.805 rmmod nvme_fabrics 00:13:45.805 rmmod nvme_keyring 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
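The tail of the run above is the shared nvmfcleanup path: errexit is relaxed (set +e), nvme-tcp is unloaded with up to twenty modprobe -r attempts (the kernel can still hold references while qpairs drain), then nvme-fabrics is removed once nothing depends on it. Condensed into one illustrative helper — the retry delay and function name are assumptions, not the exact in-tree code:

    cleanup_nvme_modules() {
        set +e                       # unload may transiently fail with EBUSY
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && break
            sleep 1                  # assumed backoff; the traced loop has none
        done
        modprobe -v -r nvme-fabrics  # unloads once no transport module uses it
        set -e
    }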
00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1007778 ']' 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1007778 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1007778 ']' 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1007778 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1007778 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1007778' 00:13:45.805 killing process with pid 1007778 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1007778 00:13:45.805 17:30:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1007778 00:13:46.064 17:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:46.064 17:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:46.064 17:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:46.064 17:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:13:46.064 17:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:13:46.064 17:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:46.064 17:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:13:46.064 17:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:46.064 17:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:46.064 17:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.064 17:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.064 17:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.601 17:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:48.601 00:13:48.601 real 0m10.065s 00:13:48.601 user 0m10.395s 00:13:48.601 sys 0m5.009s 00:13:48.601 17:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:48.601 17:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:48.601 ************************************ 00:13:48.601 END TEST nvmf_bdevio 00:13:48.601 ************************************ 00:13:48.601 17:30:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:48.601 00:13:48.601 real 4m34.229s 00:13:48.601 user 10m19.398s 00:13:48.601 sys 1m40.150s 
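killprocess, traced above, is the harness's safe-kill pattern: probe liveness with kill -0, resolve the process name with ps so a privileged wrapper is never signalled, send the kill, then wait so the PID is reaped and its exit status observed. The same shape as a standalone function — a sketch assuming a plain SIGTERM suffices for the app under test:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1           # refuse to signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                      # reap; the app may exit nonzero
    }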
00:13:48.601 17:30:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:48.601 17:30:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:48.601 ************************************ 00:13:48.601 END TEST nvmf_target_core 00:13:48.601 ************************************ 00:13:48.601 17:30:47 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:48.601 17:30:47 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:48.601 17:30:47 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.601 17:30:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:48.601 ************************************ 00:13:48.601 START TEST nvmf_target_extra 00:13:48.601 ************************************ 00:13:48.601 17:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:48.601 * Looking for test storage... 00:13:48.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:48.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.602 --rc genhtml_branch_coverage=1 00:13:48.602 --rc genhtml_function_coverage=1 00:13:48.602 --rc genhtml_legend=1 00:13:48.602 --rc geninfo_all_blocks=1 00:13:48.602 --rc geninfo_unexecuted_blocks=1 00:13:48.602 00:13:48.602 ' 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:48.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.602 --rc genhtml_branch_coverage=1 00:13:48.602 --rc genhtml_function_coverage=1 00:13:48.602 --rc genhtml_legend=1 00:13:48.602 --rc geninfo_all_blocks=1 00:13:48.602 --rc geninfo_unexecuted_blocks=1 00:13:48.602 00:13:48.602 ' 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:48.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.602 --rc genhtml_branch_coverage=1 00:13:48.602 --rc genhtml_function_coverage=1 00:13:48.602 --rc genhtml_legend=1 00:13:48.602 --rc geninfo_all_blocks=1 00:13:48.602 --rc geninfo_unexecuted_blocks=1 00:13:48.602 00:13:48.602 ' 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:48.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.602 --rc genhtml_branch_coverage=1 00:13:48.602 --rc genhtml_function_coverage=1 00:13:48.602 --rc genhtml_legend=1 00:13:48.602 --rc geninfo_all_blocks=1 00:13:48.602 --rc geninfo_unexecuted_blocks=1 00:13:48.602 00:13:48.602 ' 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
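The lt/cmp_versions xtrace that opens this test (the decimal/ver1[v]/ver2[v] lines above) is a dotted-version comparator in pure bash: both strings are split on '.', '-' and ':', the loop runs to the longer array, and the first differing numeric field decides the result. A self-contained sketch of that idea — simplified, since the in-tree helper also validates each field and supports other operators:

    lt() {  # usage: lt 1.15 2   -> exit 0 when $1 < $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first lower field wins
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }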
00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:48.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:48.602 ************************************ 00:13:48.602 START TEST nvmf_example 00:13:48.602 ************************************ 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:48.602 * Looking for test storage... 
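The recurring "[: : integer expression expected" complaint from common.sh line 33 above is benign but real: build_nvmf_app_args evaluates '[' "$VAR" -eq 1 ']' while the variable is empty, and test(1) cannot parse an empty string as an integer. Defaulting the expansion is the usual cure; a hypothetical minimal fix, where SPDK_TEST_FOO stands in for whichever flag line 33 actually reads:

    # Before: fails noisily when the flag is unset or empty.
    #   if [ "$SPDK_TEST_FOO" -eq 1 ]; then
    # After: test(1) always sees an integer operand.
    if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
        NVMF_APP+=("${NO_HUGE[@]}")   # illustrative body only
    fi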
00:13:48.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.602 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:48.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.603 --rc genhtml_branch_coverage=1 00:13:48.603 --rc genhtml_function_coverage=1 00:13:48.603 --rc genhtml_legend=1 00:13:48.603 --rc geninfo_all_blocks=1 00:13:48.603 --rc geninfo_unexecuted_blocks=1 00:13:48.603 00:13:48.603 ' 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:48.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.603 --rc genhtml_branch_coverage=1 00:13:48.603 --rc genhtml_function_coverage=1 00:13:48.603 --rc genhtml_legend=1 00:13:48.603 --rc geninfo_all_blocks=1 00:13:48.603 --rc geninfo_unexecuted_blocks=1 00:13:48.603 00:13:48.603 ' 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:48.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.603 --rc genhtml_branch_coverage=1 00:13:48.603 --rc genhtml_function_coverage=1 00:13:48.603 --rc genhtml_legend=1 00:13:48.603 --rc geninfo_all_blocks=1 00:13:48.603 --rc geninfo_unexecuted_blocks=1 00:13:48.603 00:13:48.603 ' 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:48.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.603 --rc genhtml_branch_coverage=1 00:13:48.603 --rc genhtml_function_coverage=1 00:13:48.603 --rc genhtml_legend=1 00:13:48.603 --rc geninfo_all_blocks=1 00:13:48.603 --rc geninfo_unexecuted_blocks=1 00:13:48.603 00:13:48.603 ' 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:48.603 17:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:48.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:48.603 17:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.603 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.863 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:48.863 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:48.863 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:13:48.863 17:30:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:13:55.434 17:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:55.434 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:55.434 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:55.434 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:55.435 Found net devices under 0000:86:00.0: cvl_0_0 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:55.435 Found net devices under 0000:86:00.1: cvl_0_1 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.435 17:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:55.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:55.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:13:55.435 00:13:55.435 --- 10.0.0.2 ping statistics --- 00:13:55.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.435 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:55.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:13:55.435 00:13:55.435 --- 10.0.0.1 ping statistics --- 00:13:55.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.435 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1011749 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1011749 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1011749 ']' 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:55.435 17:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:55.435 17:30:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:55.695 17:30:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:08.057 Initializing NVMe Controllers 00:14:08.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:08.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:08.057 Initialization complete. Launching workers. 00:14:08.057 ======================================================== 00:14:08.057 Latency(us) 00:14:08.057 Device Information : IOPS MiB/s Average min max 00:14:08.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18039.26 70.47 3547.42 526.44 15465.43 00:14:08.057 ======================================================== 00:14:08.057 Total : 18039.26 70.47 3547.42 526.44 15465.43 00:14:08.057 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:08.057 rmmod nvme_tcp 00:14:08.057 rmmod nvme_fabrics 00:14:08.057 rmmod nvme_keyring 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 1011749 ']' 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 1011749 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1011749 ']' 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1011749 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1011749 00:14:08.057 17:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1011749' 00:14:08.057 killing process with pid 1011749 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1011749 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1011749 00:14:08.057 nvmf threads initialize successfully 00:14:08.057 bdev subsystem init successfully 00:14:08.057 created a nvmf target service 00:14:08.057 create targets's poll groups done 00:14:08.057 all subsystems of target started 00:14:08.057 nvmf target is running 00:14:08.057 all subsystems of target stopped 00:14:08.057 destroy targets's poll groups done 00:14:08.057 destroyed the nvmf target service 00:14:08.057 bdev subsystem finish successfully 00:14:08.057 nvmf threads destroy successfully 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.057 17:31:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.316 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:08.316 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:14:08.316 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:08.316 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:08.575 00:14:08.575 real 0m19.964s 00:14:08.575 user 0m46.193s 00:14:08.575 sys 0m6.181s 00:14:08.575 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:08.575 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:08.575 ************************************ 00:14:08.575 END TEST nvmf_example 00:14:08.575 ************************************ 00:14:08.575 17:31:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:08.575 17:31:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:08.575 17:31:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:08.575 17:31:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:08.575 ************************************ 00:14:08.575 START TEST nvmf_filesystem 00:14:08.575 ************************************ 00:14:08.575 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:08.575 * Looking for test storage... 00:14:08.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.575 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:08.575 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:14:08.576 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:14:08.838 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:08.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.839 --rc genhtml_branch_coverage=1 00:14:08.839 --rc genhtml_function_coverage=1 00:14:08.839 --rc genhtml_legend=1 00:14:08.839 --rc geninfo_all_blocks=1 00:14:08.839 --rc geninfo_unexecuted_blocks=1 00:14:08.839 00:14:08.839 ' 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:08.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.839 --rc genhtml_branch_coverage=1 00:14:08.839 --rc genhtml_function_coverage=1 00:14:08.839 --rc genhtml_legend=1 00:14:08.839 --rc geninfo_all_blocks=1 00:14:08.839 --rc geninfo_unexecuted_blocks=1 00:14:08.839 00:14:08.839 ' 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:08.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.839 --rc genhtml_branch_coverage=1 00:14:08.839 --rc genhtml_function_coverage=1 00:14:08.839 --rc genhtml_legend=1 00:14:08.839 --rc geninfo_all_blocks=1 00:14:08.839 --rc geninfo_unexecuted_blocks=1 00:14:08.839 00:14:08.839 ' 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:08.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.839 --rc genhtml_branch_coverage=1 00:14:08.839 --rc genhtml_function_coverage=1 00:14:08.839 --rc genhtml_legend=1 00:14:08.839 --rc geninfo_all_blocks=1 00:14:08.839 --rc geninfo_unexecuted_blocks=1 00:14:08.839 00:14:08.839 ' 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:14:08.839 17:31:07 
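The lt/cmp_versions trace just above is a field-wise numeric version compare: both versions are split on '.', '-' and ':' into arrays, missing fields fall back to zero, and the first unequal field decides the result (here lcov 1.15 < 2, which selects the extra --rc branch/function coverage options). A standalone sketch of the same idea, under a hypothetical name and assuming purely numeric components:

    # version_lt A B -> exit 0 iff A < B, comparing dot/dash/colon-separated fields
    version_lt() {
        local IFS='.-:'
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "old lcov: enable the extra --rc coverage flags"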
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:14:08.839 17:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # 
CONFIG_RDMA=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_TESTS=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:14:08.839 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:08.840 #define SPDK_CONFIG_H 00:14:08.840 #define SPDK_CONFIG_AIO_FSDEV 1 00:14:08.840 #define SPDK_CONFIG_APPS 1 00:14:08.840 #define SPDK_CONFIG_ARCH native 00:14:08.840 #undef SPDK_CONFIG_ASAN 00:14:08.840 #undef SPDK_CONFIG_AVAHI 00:14:08.840 #undef SPDK_CONFIG_CET 00:14:08.840 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:14:08.840 #define SPDK_CONFIG_COVERAGE 1 00:14:08.840 #define SPDK_CONFIG_CROSS_PREFIX 00:14:08.840 #undef SPDK_CONFIG_CRYPTO 00:14:08.840 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:08.840 #undef SPDK_CONFIG_CUSTOMOCF 00:14:08.840 #undef SPDK_CONFIG_DAOS 00:14:08.840 #define SPDK_CONFIG_DAOS_DIR 00:14:08.840 #define SPDK_CONFIG_DEBUG 1 00:14:08.840 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:08.840 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:08.840 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:08.840 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:08.840 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:08.840 #undef SPDK_CONFIG_DPDK_UADK 00:14:08.840 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:08.840 #define SPDK_CONFIG_EXAMPLES 1 00:14:08.840 #undef SPDK_CONFIG_FC 00:14:08.840 #define SPDK_CONFIG_FC_PATH 00:14:08.840 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:08.840 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:08.840 #define SPDK_CONFIG_FSDEV 1 00:14:08.840 #undef SPDK_CONFIG_FUSE 00:14:08.840 #undef SPDK_CONFIG_FUZZER 00:14:08.840 #define SPDK_CONFIG_FUZZER_LIB 00:14:08.840 #undef SPDK_CONFIG_GOLANG 00:14:08.840 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:08.840 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:08.840 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:08.840 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:08.840 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:08.840 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:08.840 #undef SPDK_CONFIG_HAVE_LZ4 00:14:08.840 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:14:08.840 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:14:08.840 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:08.840 #define SPDK_CONFIG_IDXD 1 00:14:08.840 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:08.840 #undef SPDK_CONFIG_IPSEC_MB 00:14:08.840 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:08.840 #define SPDK_CONFIG_ISAL 1 00:14:08.840 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:08.840 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:08.840 #define SPDK_CONFIG_LIBDIR 00:14:08.840 #undef SPDK_CONFIG_LTO 00:14:08.840 #define SPDK_CONFIG_MAX_LCORES 128 00:14:08.840 #define SPDK_CONFIG_NVME_CUSE 1 00:14:08.840 #undef SPDK_CONFIG_OCF 00:14:08.840 #define SPDK_CONFIG_OCF_PATH 00:14:08.840 #define SPDK_CONFIG_OPENSSL_PATH 00:14:08.840 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:08.840 #define SPDK_CONFIG_PGO_DIR 00:14:08.840 #undef SPDK_CONFIG_PGO_USE 00:14:08.840 #define SPDK_CONFIG_PREFIX /usr/local 00:14:08.840 #undef SPDK_CONFIG_RAID5F 00:14:08.840 #undef SPDK_CONFIG_RBD 00:14:08.840 #define SPDK_CONFIG_RDMA 1 00:14:08.840 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:08.840 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:08.840 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:08.840 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:08.840 #define SPDK_CONFIG_SHARED 1 00:14:08.840 #undef SPDK_CONFIG_SMA 00:14:08.840 #define SPDK_CONFIG_TESTS 1 00:14:08.840 #undef SPDK_CONFIG_TSAN 00:14:08.840 #define SPDK_CONFIG_UBLK 1 00:14:08.840 #define SPDK_CONFIG_UBSAN 1 00:14:08.840 #undef SPDK_CONFIG_UNIT_TESTS 00:14:08.840 #undef SPDK_CONFIG_URING 00:14:08.840 #define 
SPDK_CONFIG_URING_PATH 00:14:08.840 #undef SPDK_CONFIG_URING_ZNS 00:14:08.840 #undef SPDK_CONFIG_USDT 00:14:08.840 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:08.840 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:08.840 #define SPDK_CONFIG_VFIO_USER 1 00:14:08.840 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:08.840 #define SPDK_CONFIG_VHOST 1 00:14:08.840 #define SPDK_CONFIG_VIRTIO 1 00:14:08.840 #undef SPDK_CONFIG_VTUNE 00:14:08.840 #define SPDK_CONFIG_VTUNE_DIR 00:14:08.840 #define SPDK_CONFIG_WERROR 1 00:14:08.840 #define SPDK_CONFIG_WPDK_DIR 00:14:08.840 #undef SPDK_CONFIG_XNVME 00:14:08.840 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.840 17:31:07 
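Worth noting in the applications.sh trace above (@22-@24): whether the tree was built with debug support is detected by pattern-matching the generated header include/spdk/config.h for "#define SPDK_CONFIG_DEBUG" (the escaped glob on the [[ line), rather than by re-running configure. The same check in isolation, as a hypothetical helper assuming $rootdir points at the SPDK checkout:

    # True iff this SPDK tree was configured with debug enabled
    spdk_built_with_debug() {
        local cfg="$rootdir/include/spdk/config.h"
        [[ -e $cfg && $(< "$cfg") == *"#define SPDK_CONFIG_DEBUG"* ]]
    }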
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:08.840 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_IOAT 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:14:08.841 
17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:14:08.841 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:08.842 17:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
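The exports traced at autotest_common.sh@197-@242 above arm the sanitizer runtimes for every child process: ASan and UBSan are told to abort on the first finding (UBSan with exit code 134), and LeakSanitizer is pointed at a freshly written suppression file so the known libfuse3.so leak does not fail the run. A minimal sketch of that setup, using the standard ASAN_OPTIONS/UBSAN_OPTIONS/LSAN_OPTIONS runtime variables:

    # Fail fast and loudly on any sanitizer finding
    export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
    export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'
    # Suppress one known leak instead of failing the whole test run
    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo 'leak:libfuse3.so' > "$supp"
    export LSAN_OPTIONS="suppressions=$supp"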
00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j96 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1014076 ]] 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1014076 00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
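
The requested_size/storage_candidates trace that follows is autotest_common.sh's set_test_storage: it asks for roughly 2 GiB of scratch space, prefers the test's own directory, falls back to a fresh mktemp path under /tmp, and parses df output into associative arrays so it can take the first candidate whose filesystem has enough free space. A minimal sketch of that selection logic, assuming GNU df and bash 4+ (testdir is set by the harness; the -B1 byte-size flag and the mkdir placement are simplifications, and the tmpfs/ramfs special cases are omitted):

# --- sketch: the storage-candidate scan performed by set_test_storage ---
requested_size=2147483648                      # 2 GiB, as requested above
storage_fallback=$(mktemp -udt spdk.XXXXXX)    # unique path only (-u), e.g. /tmp/spdk.FOV4H7
storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

declare -A avails
while read -r _ _ _ _ avail _ mount; do        # df -T: src type size used avail use% mount
    avails[$mount]=$avail                      # free bytes per mount point
done < <(df -T -B1 | grep -v Filesystem)

for target_dir in "${storage_candidates[@]}"; do
    mkdir -p "$target_dir"                     # the trace mkdirs all candidates up front
    mount=$(df -B1 "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    if (( avails[$mount] >= requested_size )); then
        echo "* Found test storage at $target_dir"
        break
    fi
done
# --- end sketch ---

Sizing against the mount point rather than the directory is why the later target_space=189289967616 line is meaningful: the chosen candidate lives on the root overlay, so the whole root filesystem's free space counts toward the 2 GiB request.
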
00:14:08.842 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.FOV4H7 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.FOV4H7/tests/target /tmp/spdk.FOV4H7 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=606707712 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:14:08.843 17:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4677722112 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=189289967616 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=195963949056 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6673981440 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97971941376 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981972480 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=39169748992 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=39192793088 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23044096 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97981513728 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981976576 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=462848 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:14:08.843 17:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=19596382208 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=19596394496 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:14:08.843 * Looking for test storage... 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=189289967616 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8888573952 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:14:08.843 17:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:14:08.843 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:14:08.844 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:09.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.103 --rc genhtml_branch_coverage=1 00:14:09.103 --rc genhtml_function_coverage=1 00:14:09.103 --rc genhtml_legend=1 00:14:09.103 --rc geninfo_all_blocks=1 00:14:09.103 --rc geninfo_unexecuted_blocks=1 00:14:09.103 00:14:09.103 ' 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:09.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.103 --rc genhtml_branch_coverage=1 00:14:09.103 --rc genhtml_function_coverage=1 00:14:09.103 --rc genhtml_legend=1 00:14:09.103 --rc geninfo_all_blocks=1 00:14:09.103 --rc geninfo_unexecuted_blocks=1 00:14:09.103 00:14:09.103 ' 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:09.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.103 --rc genhtml_branch_coverage=1 00:14:09.103 --rc genhtml_function_coverage=1 00:14:09.103 --rc genhtml_legend=1 00:14:09.103 --rc geninfo_all_blocks=1 00:14:09.103 --rc geninfo_unexecuted_blocks=1 00:14:09.103 00:14:09.103 ' 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:09.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.103 --rc genhtml_branch_coverage=1 00:14:09.103 --rc genhtml_function_coverage=1 00:14:09.103 --rc genhtml_legend=1 00:14:09.103 --rc geninfo_all_blocks=1 00:14:09.103 --rc geninfo_unexecuted_blocks=1 00:14:09.103 00:14:09.103 ' 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.103 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.104 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.104 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:09.104 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:09.104 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.104 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.104 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.104 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.104 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.104 17:31:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:09.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:14:09.104 17:31:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:14:15.678 
17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:15.678 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:15.678 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:15.678 Found net devices under 0000:86:00.0: cvl_0_0 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:15.678 Found net devices under 
0000:86:00.1: cvl_0_1 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:15.678 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:15.679 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:15.679 17:31:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:15.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:15.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:14:15.679 00:14:15.679 --- 10.0.0.2 ping statistics --- 00:14:15.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.679 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:15.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:15.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:14:15.679 00:14:15.679 --- 10.0.0.1 ping statistics --- 00:14:15.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.679 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:15.679 ************************************ 00:14:15.679 START TEST nvmf_filesystem_no_in_capsule 00:14:15.679 ************************************ 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
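
Everything above from nvmf_tcp_init is the physical-NIC network plumbing: the two e810 ports show up as cvl_0_0 and cvl_0_1, the target-side port is moved into a private network namespace, and bidirectional pings prove 10.0.0.1/10.0.0.2 connectivity before modprobe nvme-tcp loads the kernel initiator. A condensed replay of the traced commands (run as root; the interface names are specific to this machine, and the addr-flush steps are omitted):

# --- sketch: target/initiator split across a network namespace ---
ip netns add cvl_0_0_ns_spdk                     # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move one NIC port into it
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator IP, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
ping -c 1 10.0.0.2                               # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target namespace -> host
modprobe nvme-tcp                                # kernel NVMe/TCP initiator
# --- end sketch ---
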
00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1017293 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1017293 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1017293 ']' 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:15.679 [2024-10-14 17:31:14.150221] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:14:15.679 [2024-10-14 17:31:14.150259] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.679 [2024-10-14 17:31:14.222470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:15.679 [2024-10-14 17:31:14.264811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.679 [2024-10-14 17:31:14.264846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.679 [2024-10-14 17:31:14.264852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.679 [2024-10-14 17:31:14.264858] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.679 [2024-10-14 17:31:14.264863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
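
nvmfappstart has now launched nvmf_tgt (pid 1017293) inside the target namespace, and waitforlisten is polling its RPC socket; the DPDK EAL banner above and the reactor lines below confirm startup on cores 0-3 (-m 0xF). The earlier "[: : integer expression expected" from nvmf/common.sh line 33 is a recorded harness wart, an empty variable reaching an integer test while the app's argument array was built; defaulting it, e.g. ${VAR:-0}, would presumably silence it. A hedged sketch of the launch-and-wait step, with paths relative to the spdk checkout and an arbitrary retry budget:

# --- sketch: start the target and wait for its RPC socket ---
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    # rpc_get_methods succeeds only once the app is up and listening
    if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
    sleep 0.5
done
# --- end sketch ---
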
00:14:15.679 [2024-10-14 17:31:14.267619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.679 [2024-10-14 17:31:14.267649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.679 [2024-10-14 17:31:14.267758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.679 [2024-10-14 17:31:14.267759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:15.679 [2024-10-14 17:31:14.415778] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:15.679 Malloc1 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.679 17:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:15.679 [2024-10-14 17:31:14.559578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:14:15.679 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:15.680 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.680 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:15.680 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.680 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:15.680 { 00:14:15.680 "name": "Malloc1", 00:14:15.680 "aliases": [ 00:14:15.680 "8a226e56-ded6-4db0-a45b-c53b6bb4c5a1" 00:14:15.680 ], 00:14:15.680 "product_name": "Malloc disk", 00:14:15.680 "block_size": 512, 00:14:15.680 "num_blocks": 1048576, 00:14:15.680 "uuid": "8a226e56-ded6-4db0-a45b-c53b6bb4c5a1", 00:14:15.680 "assigned_rate_limits": { 00:14:15.680 "rw_ios_per_sec": 0, 00:14:15.680 "rw_mbytes_per_sec": 0, 00:14:15.680 "r_mbytes_per_sec": 0, 00:14:15.680 "w_mbytes_per_sec": 0 00:14:15.680 }, 00:14:15.680 "claimed": true, 00:14:15.680 "claim_type": "exclusive_write", 00:14:15.680 "zoned": false, 00:14:15.680 "supported_io_types": { 00:14:15.680 "read": 
true, 00:14:15.680 "write": true, 00:14:15.680 "unmap": true, 00:14:15.680 "flush": true, 00:14:15.680 "reset": true, 00:14:15.680 "nvme_admin": false, 00:14:15.680 "nvme_io": false, 00:14:15.680 "nvme_io_md": false, 00:14:15.680 "write_zeroes": true, 00:14:15.680 "zcopy": true, 00:14:15.680 "get_zone_info": false, 00:14:15.680 "zone_management": false, 00:14:15.680 "zone_append": false, 00:14:15.680 "compare": false, 00:14:15.680 "compare_and_write": false, 00:14:15.680 "abort": true, 00:14:15.680 "seek_hole": false, 00:14:15.680 "seek_data": false, 00:14:15.680 "copy": true, 00:14:15.680 "nvme_iov_md": false 00:14:15.680 }, 00:14:15.680 "memory_domains": [ 00:14:15.680 { 00:14:15.680 "dma_device_id": "system", 00:14:15.680 "dma_device_type": 1 00:14:15.680 }, 00:14:15.680 { 00:14:15.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.680 "dma_device_type": 2 00:14:15.680 } 00:14:15.680 ], 00:14:15.680 "driver_specific": {} 00:14:15.680 } 00:14:15.680 ]' 00:14:15.680 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:15.680 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:14:15.680 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:15.680 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:14:15.680 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:14:15.680 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:14:15.680 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:15.680 17:31:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:17.058 17:31:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:17.058 17:31:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:14:17.058 17:31:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:17.058 17:31:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:17.058 17:31:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:14:18.964 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:18.964 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:18.964 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:14:18.964 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:18.964 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:18.964 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:14:18.964 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:18.964 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:18.964 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:18.964 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:18.964 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:18.964 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:18.964 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:18.964 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:18.964 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:18.965 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:18.965 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:18.965 17:31:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:19.903 17:31:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:20.840 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:14:20.840 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:20.840 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:20.840 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:20.840 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:20.840 ************************************ 00:14:20.840 START TEST filesystem_ext4 00:14:20.840 ************************************ 00:14:20.840 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
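With the fixture in place (Malloc1 exported over NVMe/TCP, the initiator connected, the 536870912-byte size check passed, and a single GPT partition created), the run_test above starts the first of three identical per-filesystem checks. As a condensed sketch reconstructed only from the commands traced below (names, paths and the pid are exactly those of this run; xtrace plumbing omitted), nvmf_filesystem_create amounts to:

  fstype=ext4                     # repeated later with btrfs and xfs
  nvme_name=nvme0n1               # resolved above from the lsblk SERIAL match
  force=-F                        # ext4 takes -F; btrfs and xfs take -f
  mkfs.$fstype $force /dev/${nvme_name}p1
  mount /dev/${nvme_name}p1 /mnt/device
  touch /mnt/device/aaa           # prove the mount is writable over the fabric
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 1017293                 # the nvmf target process must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible to the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible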
00:14:20.840 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:20.840 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:20.840 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:20.840 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:14:20.840 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:20.840 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:14:20.840 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:14:20.840 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:14:20.840 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:14:20.840 17:31:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:20.840 mke2fs 1.47.0 (5-Feb-2023) 00:14:20.840 Discarding device blocks: 0/522240 done 00:14:20.840 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:20.840 Filesystem UUID: a9a8e4ab-1434-42ee-a666-771b179cc968 00:14:20.840 Superblock backups stored on blocks: 00:14:20.840 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:20.840 00:14:20.840 Allocating group tables: 0/64 done 00:14:20.840 Writing inode tables: 0/64 done 00:14:23.376 Creating journal (8192 blocks): done 00:14:25.272 Writing superblocks and filesystem accounting information: 0/64 done 00:14:25.272 00:14:25.272 17:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:14:25.272 17:31:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:31.842
17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1017293 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:31.842 00:14:31.842 real 0m10.744s 00:14:31.842 user 0m0.030s 00:14:31.842 sys 0m0.075s 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:31.842 ************************************ 00:14:31.842 END TEST filesystem_ext4 00:14:31.842 ************************************ 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:31.842 ************************************ 00:14:31.842 START TEST filesystem_btrfs 00:14:31.842 ************************************ 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:14:31.842 17:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:14:31.842 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:31.842 btrfs-progs v6.8.1 00:14:31.842 See https://btrfs.readthedocs.io for more information. 00:14:31.842 00:14:31.842 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:31.842 NOTE: several default settings have changed in version 5.15, please make sure 00:14:31.842 this does not affect your deployments: 00:14:31.842 - DUP for metadata (-m dup) 00:14:31.842 - enabled no-holes (-O no-holes) 00:14:31.842 - enabled free-space-tree (-R free-space-tree) 00:14:31.842 00:14:31.842 Label: (null) 00:14:31.842 UUID: 55f733e6-0464-4573-8ebb-68c1ed9c97f5 00:14:31.842 Node size: 16384 00:14:31.842 Sector size: 4096 (CPU page size: 4096) 00:14:31.842 Filesystem size: 510.00MiB 00:14:31.842 Block group profiles: 00:14:31.842 Data: single 8.00MiB 00:14:31.842 Metadata: DUP 32.00MiB 00:14:31.842 System: DUP 8.00MiB 00:14:31.842 SSD detected: yes 00:14:31.843 Zoned device: no 00:14:31.843 Features: extref, skinny-metadata, no-holes, free-space-tree 00:14:31.843 Checksum: crc32c 00:14:31.843 Number of devices: 1 00:14:31.843 Devices: 00:14:31.843 ID SIZE PATH 00:14:31.843 1 510.00MiB /dev/nvme0n1p1 00:14:31.843 00:14:31.843 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:14:31.843 17:31:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:32.780 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:32.780 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:14:32.780 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:32.780 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:14:32.780 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:32.780 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:32.780 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1017293 00:14:32.780 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:32.780 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:32.780 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:32.780 
17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:32.780 00:14:32.780 real 0m1.126s 00:14:32.780 user 0m0.026s 00:14:32.780 sys 0m0.116s 00:14:32.780 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:32.780 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:32.780 ************************************ 00:14:32.780 END TEST filesystem_btrfs 00:14:32.780 ************************************ 00:14:32.781 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:32.781 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:32.781 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:32.781 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:32.781 ************************************ 00:14:32.781 START TEST filesystem_xfs 00:14:32.781 ************************************ 00:14:32.781 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:14:32.781 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:32.781 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:32.781 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:32.781 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:14:32.781 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:32.781 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:14:32.781 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:14:32.781 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:14:32.781 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:14:32.781 17:31:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:32.781 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:32.781 = sectsz=512 attr=2, projid32bit=1 00:14:32.781 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:32.781 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:32.781 data 
= bsize=4096 blocks=130560, imaxpct=25 00:14:32.781 = sunit=0 swidth=0 blks 00:14:32.781 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:32.781 log =internal log bsize=4096 blocks=16384, version=2 00:14:32.781 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:32.781 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:34.159 Discarding blocks...Done. 00:14:34.160 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:14:34.160 17:31:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:35.538 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:35.798 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:14:35.798 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:35.798 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:14:35.798 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:14:35.798 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:35.798 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1017293 00:14:35.798 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:35.798 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:35.798 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:35.798 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:35.798 00:14:35.798 real 0m3.065s 00:14:35.798 user 0m0.026s 00:14:35.798 sys 0m0.071s 00:14:35.798 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:35.798 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:35.798 ************************************ 00:14:35.798 END TEST filesystem_xfs 00:14:35.798 ************************************ 00:14:35.798 17:31:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:36.057 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:36.058 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:36.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.058 17:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:36.058 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:14:36.058 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:36.058 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:36.317 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:36.317 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:36.317 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:14:36.317 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:36.317 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.317 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:36.317 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.317 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:36.317 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1017293 00:14:36.317 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1017293 ']' 00:14:36.317 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1017293 00:14:36.317 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:14:36.317 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:36.317 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1017293 00:14:36.317 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:36.317 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:36.317 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1017293' 00:14:36.317 killing process with pid 1017293 00:14:36.317 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1017293 00:14:36.317 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 1017293 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:36.577 00:14:36.577 real 0m21.501s 00:14:36.577 user 1m24.729s 00:14:36.577 sys 0m1.544s 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:36.577 ************************************ 00:14:36.577 END TEST nvmf_filesystem_no_in_capsule 00:14:36.577 ************************************ 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:36.577 ************************************ 00:14:36.577 START TEST nvmf_filesystem_in_capsule 00:14:36.577 ************************************ 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1021000 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1021000 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1021000 ']' 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
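The suite now reruns the same filesystem checks with in-capsule data enabled; the functional difference is confined to the transport RPC traced below, where -c 4096 lets hosts carry up to 4096 bytes of I/O data inside the command capsule itself. The target setup over the next records condenses to this sequence (arguments exactly as issued in this run):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096    # -c = in-capsule data size
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1              # 512 MiB bdev, 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
      --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420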
00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:36.577 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:36.836 [2024-10-14 17:31:35.726723] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:14:36.836 [2024-10-14 17:31:35.726766] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.836 [2024-10-14 17:31:35.800625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:36.836 [2024-10-14 17:31:35.844436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.836 [2024-10-14 17:31:35.844476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.836 [2024-10-14 17:31:35.844483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.836 [2024-10-14 17:31:35.844489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.836 [2024-10-14 17:31:35.844494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.836 [2024-10-14 17:31:35.846048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.836 [2024-10-14 17:31:35.846157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.836 [2024-10-14 17:31:35.846266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.836 [2024-10-14 17:31:35.846266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:36.836 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:36.836 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:14:36.836 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:36.836 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:36.836 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:36.836 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.836 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:37.096 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:14:37.096 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.096 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:37.096 [2024-10-14 17:31:35.982299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.096 17:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.096 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:37.096 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.096 17:31:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:37.096 Malloc1 00:14:37.096 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.096 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:37.096 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.096 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:37.096 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.096 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:37.096 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.096 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:37.096 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.096 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.096 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.096 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:37.097 [2024-10-14 17:31:36.128524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.097 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.097 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:37.097 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:14:37.097 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:37.097 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:14:37.097 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:14:37.097 17:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:37.097 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.097 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:37.097 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.097 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:37.097 { 00:14:37.097 "name": "Malloc1", 00:14:37.097 "aliases": [ 00:14:37.097 "03fd36ed-6edc-404e-b2f6-041156c8a52d" 00:14:37.097 ], 00:14:37.097 "product_name": "Malloc disk", 00:14:37.097 "block_size": 512, 00:14:37.097 "num_blocks": 1048576, 00:14:37.097 "uuid": "03fd36ed-6edc-404e-b2f6-041156c8a52d", 00:14:37.097 "assigned_rate_limits": { 00:14:37.097 "rw_ios_per_sec": 0, 00:14:37.097 "rw_mbytes_per_sec": 0, 00:14:37.097 "r_mbytes_per_sec": 0, 00:14:37.097 "w_mbytes_per_sec": 0 00:14:37.097 }, 00:14:37.097 "claimed": true, 00:14:37.097 "claim_type": "exclusive_write", 00:14:37.097 "zoned": false, 00:14:37.097 "supported_io_types": { 00:14:37.097 "read": true, 00:14:37.097 "write": true, 00:14:37.097 "unmap": true, 00:14:37.097 "flush": true, 00:14:37.097 "reset": true, 00:14:37.097 "nvme_admin": false, 00:14:37.097 "nvme_io": false, 00:14:37.097 "nvme_io_md": false, 00:14:37.097 "write_zeroes": true, 00:14:37.097 "zcopy": true, 00:14:37.097 "get_zone_info": false, 00:14:37.097 "zone_management": false, 00:14:37.097 "zone_append": false, 00:14:37.097 "compare": false, 00:14:37.097 "compare_and_write": false, 00:14:37.097 "abort": true, 00:14:37.097 "seek_hole": false, 00:14:37.097 "seek_data": false, 00:14:37.097 "copy": true, 00:14:37.097 "nvme_iov_md": false 00:14:37.097 }, 00:14:37.097 "memory_domains": [ 00:14:37.097 { 00:14:37.097 "dma_device_id": "system", 00:14:37.097 "dma_device_type": 1 00:14:37.097 }, 00:14:37.097 { 00:14:37.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.097 "dma_device_type": 2 00:14:37.097 } 00:14:37.097 ], 00:14:37.097 "driver_specific": {} 00:14:37.097 } 00:14:37.097 ]' 00:14:37.097 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:37.097 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:14:37.097 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:37.097 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:14:37.097 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:14:37.097 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:14:37.097 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:37.097 17:31:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:38.476 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:38.476 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:14:38.476 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:38.476 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:38.476 17:31:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:14:40.380 17:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:40.380 17:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:40.380 17:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:40.380 17:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:40.380 17:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:40.380 17:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:14:40.380 17:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:40.380 17:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:40.380 17:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:40.380 17:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:40.380 17:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:40.380 17:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:40.380 17:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:40.380 17:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:40.380 17:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:40.380 17:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:40.380 17:31:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:40.639 17:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:41.206 17:31:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:42.144 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:14:42.144 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:42.144 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:42.144 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:42.144 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:42.144 ************************************ 00:14:42.144 START TEST filesystem_in_capsule_ext4 00:14:42.144 ************************************ 00:14:42.144 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:42.144 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:42.144 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:42.144 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:42.144 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:14:42.144 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:42.144 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:14:42.144 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:14:42.144 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:14:42.144 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:14:42.144 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:42.144 mke2fs 1.47.0 (5-Feb-2023) 00:14:42.403 Discarding device blocks: 0/522240 done 00:14:42.403 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:42.403 Filesystem UUID: 126ebf25-5da5-4ce1-b202-b1ddea925d14 00:14:42.403 Superblock backups stored on blocks: 00:14:42.403 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:42.403 00:14:42.403 Allocating group tables: 0/64 done 00:14:42.403 Writing inode tables: 
0/64 done 00:14:42.403 Creating journal (8192 blocks): done 00:14:42.403 Writing superblocks and filesystem accounting information: 0/64 done 00:14:42.403 00:14:42.403 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:14:42.403 17:31:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1021000 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:48.974 00:14:48.974 real 0m6.262s 00:14:48.974 user 0m0.024s 00:14:48.974 sys 0m0.071s 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:48.974 ************************************ 00:14:48.974 END TEST filesystem_in_capsule_ext4 00:14:48.974 ************************************ 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:48.974 
************************************ 00:14:48.974 START TEST filesystem_in_capsule_btrfs 00:14:48.974 ************************************ 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:48.974 btrfs-progs v6.8.1 00:14:48.974 See https://btrfs.readthedocs.io for more information. 00:14:48.974 00:14:48.974 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:14:48.974 NOTE: several default settings have changed in version 5.15, please make sure 00:14:48.974 this does not affect your deployments: 00:14:48.974 - DUP for metadata (-m dup) 00:14:48.974 - enabled no-holes (-O no-holes) 00:14:48.974 - enabled free-space-tree (-R free-space-tree) 00:14:48.974 00:14:48.974 Label: (null) 00:14:48.974 UUID: e6b5f5dd-15b2-4be9-9f25-fb2a8a9a2199 00:14:48.974 Node size: 16384 00:14:48.974 Sector size: 4096 (CPU page size: 4096) 00:14:48.974 Filesystem size: 510.00MiB 00:14:48.974 Block group profiles: 00:14:48.974 Data: single 8.00MiB 00:14:48.974 Metadata: DUP 32.00MiB 00:14:48.974 System: DUP 8.00MiB 00:14:48.974 SSD detected: yes 00:14:48.974 Zoned device: no 00:14:48.974 Features: extref, skinny-metadata, no-holes, free-space-tree 00:14:48.974 Checksum: crc32c 00:14:48.974 Number of devices: 1 00:14:48.974 Devices: 00:14:48.974 ID SIZE PATH 00:14:48.974 1 510.00MiB /dev/nvme0n1p1 00:14:48.974 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:14:48.974 17:31:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1021000 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:49.911 00:14:49.911 real 0m1.206s 00:14:49.911 user 0m0.032s 00:14:49.911 sys 0m0.109s 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:14:49.911 ************************************ 00:14:49.911 END TEST filesystem_in_capsule_btrfs 00:14:49.911 ************************************ 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:49.911 ************************************ 00:14:49.911 START TEST filesystem_in_capsule_xfs 00:14:49.911 ************************************ 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:49.911 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:49.912 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:14:49.912 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:49.912 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:14:49.912 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:14:49.912 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:14:49.912 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:14:49.912 17:31:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:49.912 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:49.912 = sectsz=512 attr=2, projid32bit=1 00:14:49.912 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:49.912 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:49.912 data = bsize=4096 blocks=130560, imaxpct=25 00:14:49.912 = sunit=0 swidth=0 blks 00:14:49.912 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:49.912 log =internal log bsize=4096 blocks=16384, version=2 00:14:49.912 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:49.912 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:50.849 Discarding blocks...Done. 
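Before the mount that follows, the geometry reported by the three mkfs runs can be cross-checked against the 512 MiB Malloc1 namespace and the 510 MiB partition carved from it, using only numbers already in this log:

  # namespace: 1048576 blocks * 512 B = 536870912 B = 512 MiB   (bdev_get_bdevs)
  # ext4:       522240 blocks * 1 KiB = 534773760 B = 510 MiB   (mke2fs output)
  # xfs:        130560 blocks * 4 KiB = 534773760 B = 510 MiB   (data section above)
  # btrfs:     reported directly as "Filesystem size: 510.00MiB"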
00:14:50.849 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:14:50.849 17:31:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:53.385 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:53.385 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:14:53.385 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:53.385 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:14:53.385 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:14:53.385 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:53.385 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1021000 00:14:53.385 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:53.385 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:53.385 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:53.385 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:53.385 00:14:53.385 real 0m3.335s 00:14:53.385 user 0m0.018s 00:14:53.385 sys 0m0.079s 00:14:53.385 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:53.385 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:53.385 ************************************ 00:14:53.385 END TEST filesystem_in_capsule_xfs 00:14:53.385 ************************************ 00:14:53.385 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:53.385 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:53.385 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:53.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.644 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:53.644 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:14:53.644 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:53.644 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.645 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:53.645 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.645 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:14:53.645 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.645 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.645 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:53.645 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.645 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:53.645 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1021000 00:14:53.645 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1021000 ']' 00:14:53.645 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1021000 00:14:53.645 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:14:53.645 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:53.645 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1021000 00:14:53.645 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:53.645 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:53.645 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1021000' 00:14:53.645 killing process with pid 1021000 00:14:53.645 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1021000 00:14:53.645 17:31:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1021000 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:54.214 00:14:54.214 real 0m17.381s 00:14:54.214 user 1m8.373s 00:14:54.214 sys 0m1.429s 00:14:54.214 17:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:54.214 ************************************ 00:14:54.214 END TEST nvmf_filesystem_in_capsule 00:14:54.214 ************************************ 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:54.214 rmmod nvme_tcp 00:14:54.214 rmmod nvme_fabrics 00:14:54.214 rmmod nvme_keyring 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.214 17:31:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.119 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:56.119 00:14:56.119 real 0m47.659s 00:14:56.119 user 2m35.127s 00:14:56.119 sys 0m7.747s 00:14:56.119 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:56.119 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:56.119 
************************************ 00:14:56.119 END TEST nvmf_filesystem 00:14:56.120 ************************************ 00:14:56.120 17:31:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:56.120 17:31:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:56.120 17:31:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:56.120 17:31:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:56.380 ************************************ 00:14:56.380 START TEST nvmf_target_discovery 00:14:56.380 ************************************ 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:56.380 * Looking for test storage... 00:14:56.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:56.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.380 --rc genhtml_branch_coverage=1 00:14:56.380 --rc genhtml_function_coverage=1 00:14:56.380 --rc genhtml_legend=1 00:14:56.380 --rc geninfo_all_blocks=1 00:14:56.380 --rc geninfo_unexecuted_blocks=1 00:14:56.380 00:14:56.380 ' 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:56.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.380 --rc genhtml_branch_coverage=1 00:14:56.380 --rc genhtml_function_coverage=1 00:14:56.380 --rc genhtml_legend=1 00:14:56.380 --rc geninfo_all_blocks=1 00:14:56.380 --rc geninfo_unexecuted_blocks=1 00:14:56.380 00:14:56.380 ' 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:56.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.380 --rc genhtml_branch_coverage=1 00:14:56.380 --rc genhtml_function_coverage=1 00:14:56.380 --rc genhtml_legend=1 00:14:56.380 --rc geninfo_all_blocks=1 00:14:56.380 --rc geninfo_unexecuted_blocks=1 00:14:56.380 00:14:56.380 ' 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:56.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.380 --rc genhtml_branch_coverage=1 00:14:56.380 --rc genhtml_function_coverage=1 00:14:56.380 --rc genhtml_legend=1 00:14:56.380 --rc geninfo_all_blocks=1 00:14:56.380 --rc geninfo_unexecuted_blocks=1 00:14:56.380 00:14:56.380 ' 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.380 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:56.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:14:56.381 17:31:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:15:02.952 17:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:02.952 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:02.952 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:02.952 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:02.953 Found net devices under 0000:86:00.0: cvl_0_0 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
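Both e810 ports, 0000:86:00.0 and 0000:86:00.1 (0x8086 - 0x159b, ice driver), matched the supported-device table, so gather_supported_nvmf_pci_devs is now resolving each PCI function to its kernel net device through sysfs; the [[ up == up ]] test that follows keeps only interfaces whose link is up. A stand-alone illustration of that sysfs walk using the two devices this log found; the operstate check is an assumption standing in for however nvmf/common.sh determines link state:

for pci in 0000:86:00.0 0000:86:00.1; do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$net" ] || continue
        dev=${net##*/}                            # e.g. cvl_0_0, cvl_0_1
        state=$(cat "$net/operstate" 2>/dev/null)
        [ "$state" = up ] && echo "$pci -> $dev"  # matches the 'Found net devices under ...' lines below
    done
done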
00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:02.953 Found net devices under 0000:86:00.1: cvl_0_1 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:02.953 17:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:02.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.414 ms 00:15:02.953 00:15:02.953 --- 10.0.0.2 ping statistics --- 00:15:02.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.953 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:02.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:02.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:15:02.953 00:15:02.953 --- 10.0.0.1 ping statistics --- 00:15:02.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.953 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=1027706 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 1027706 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1027706 ']' 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:02.953 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.953 [2024-10-14 17:32:01.546295] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:15:02.953 [2024-10-14 17:32:01.546347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.953 [2024-10-14 17:32:01.619464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.953 [2024-10-14 17:32:01.663050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.953 [2024-10-14 17:32:01.663086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.954 [2024-10-14 17:32:01.663094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.954 [2024-10-14 17:32:01.663100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.954 [2024-10-14 17:32:01.663105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
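With cvl_0_0 moved into the cvl_0_0_ns_spdk namespace, 10.0.0.1/10.0.0.2 assigned, and both ping checks green, nvmfappstart has launched nvmf_tgt inside that namespace (nvmfpid=1027706) and waitforlisten polls the RPC socket while the EAL initialization above brings the reactors up. A condensed sketch of that launch-and-wait pattern; the poll count, interval, and the use of rpc_get_methods as the readiness probe are assumptions:

# hypothetical condensed form of nvmfappstart + waitforlisten
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    # succeeds once the target answers on /var/tmp/spdk.sock
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done

Once the socket answers, the script installs the process_shm/nvmftestfini trap and creates the TCP transport (nvmf_create_transport -t tcp -o -u 8192), which produces the '*** TCP Transport Init ***' notice below.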
00:15:02.954 [2024-10-14 17:32:01.664674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.954 [2024-10-14 17:32:01.664782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.954 [2024-10-14 17:32:01.664911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.954 [2024-10-14 17:32:01.664911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.954 [2024-10-14 17:32:01.809879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.954 Null1 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.954 17:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.954 [2024-10-14 17:32:01.855243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.954 Null2 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:15:02.954 Null3 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.954 Null4 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.954 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.955 17:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:15:02.955 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.955 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.955 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.955 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:02.955 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.955 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.955 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.955 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:15:02.955 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.955 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.955 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.955 17:32:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:15:03.213 00:15:03.213 Discovery Log Number of Records 6, Generation counter 6 00:15:03.213 =====Discovery Log Entry 0====== 00:15:03.213 trtype: tcp 00:15:03.213 adrfam: ipv4 00:15:03.213 subtype: current discovery subsystem 00:15:03.213 treq: not required 00:15:03.213 portid: 0 00:15:03.213 trsvcid: 4420 00:15:03.213 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:03.213 traddr: 10.0.0.2 00:15:03.213 eflags: explicit discovery connections, duplicate discovery information 00:15:03.213 sectype: none 00:15:03.213 =====Discovery Log Entry 1====== 00:15:03.213 trtype: tcp 00:15:03.213 adrfam: ipv4 00:15:03.213 subtype: nvme subsystem 00:15:03.213 treq: not required 00:15:03.213 portid: 0 00:15:03.213 trsvcid: 4420 00:15:03.213 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:03.213 traddr: 10.0.0.2 00:15:03.213 eflags: none 00:15:03.213 sectype: none 00:15:03.213 =====Discovery Log Entry 2====== 00:15:03.213 trtype: tcp 00:15:03.213 adrfam: ipv4 00:15:03.213 subtype: nvme subsystem 00:15:03.213 treq: not required 00:15:03.213 portid: 0 00:15:03.213 trsvcid: 4420 00:15:03.213 subnqn: nqn.2016-06.io.spdk:cnode2 00:15:03.213 traddr: 10.0.0.2 00:15:03.213 eflags: none 00:15:03.213 sectype: none 00:15:03.213 =====Discovery Log Entry 3====== 00:15:03.213 trtype: tcp 00:15:03.213 adrfam: ipv4 00:15:03.213 subtype: nvme subsystem 00:15:03.213 treq: not required 00:15:03.213 portid: 0 00:15:03.213 trsvcid: 4420 00:15:03.213 subnqn: nqn.2016-06.io.spdk:cnode3 00:15:03.213 traddr: 10.0.0.2 00:15:03.213 eflags: none 00:15:03.213 sectype: none 00:15:03.213 =====Discovery Log Entry 4====== 00:15:03.213 trtype: tcp 00:15:03.213 adrfam: ipv4 00:15:03.213 subtype: nvme subsystem 
00:15:03.213 treq: not required 00:15:03.213 portid: 0 00:15:03.213 trsvcid: 4420 00:15:03.213 subnqn: nqn.2016-06.io.spdk:cnode4 00:15:03.213 traddr: 10.0.0.2 00:15:03.213 eflags: none 00:15:03.213 sectype: none 00:15:03.213 =====Discovery Log Entry 5====== 00:15:03.213 trtype: tcp 00:15:03.213 adrfam: ipv4 00:15:03.213 subtype: discovery subsystem referral 00:15:03.214 treq: not required 00:15:03.214 portid: 0 00:15:03.214 trsvcid: 4430 00:15:03.214 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:03.214 traddr: 10.0.0.2 00:15:03.214 eflags: none 00:15:03.214 sectype: none 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:15:03.214 Perform nvmf subsystem discovery via RPC 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.214 [ 00:15:03.214 { 00:15:03.214 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:03.214 "subtype": "Discovery", 00:15:03.214 "listen_addresses": [ 00:15:03.214 { 00:15:03.214 "trtype": "TCP", 00:15:03.214 "adrfam": "IPv4", 00:15:03.214 "traddr": "10.0.0.2", 00:15:03.214 "trsvcid": "4420" 00:15:03.214 } 00:15:03.214 ], 00:15:03.214 "allow_any_host": true, 00:15:03.214 "hosts": [] 00:15:03.214 }, 00:15:03.214 { 00:15:03.214 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.214 "subtype": "NVMe", 00:15:03.214 "listen_addresses": [ 00:15:03.214 { 00:15:03.214 "trtype": "TCP", 00:15:03.214 "adrfam": "IPv4", 00:15:03.214 "traddr": "10.0.0.2", 00:15:03.214 "trsvcid": "4420" 00:15:03.214 } 00:15:03.214 ], 00:15:03.214 "allow_any_host": true, 00:15:03.214 "hosts": [], 00:15:03.214 "serial_number": "SPDK00000000000001", 00:15:03.214 "model_number": "SPDK bdev Controller", 00:15:03.214 "max_namespaces": 32, 00:15:03.214 "min_cntlid": 1, 00:15:03.214 "max_cntlid": 65519, 00:15:03.214 "namespaces": [ 00:15:03.214 { 00:15:03.214 "nsid": 1, 00:15:03.214 "bdev_name": "Null1", 00:15:03.214 "name": "Null1", 00:15:03.214 "nguid": "C2160E53EB814DB798F3955A24DEC2D5", 00:15:03.214 "uuid": "c2160e53-eb81-4db7-98f3-955a24dec2d5" 00:15:03.214 } 00:15:03.214 ] 00:15:03.214 }, 00:15:03.214 { 00:15:03.214 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:03.214 "subtype": "NVMe", 00:15:03.214 "listen_addresses": [ 00:15:03.214 { 00:15:03.214 "trtype": "TCP", 00:15:03.214 "adrfam": "IPv4", 00:15:03.214 "traddr": "10.0.0.2", 00:15:03.214 "trsvcid": "4420" 00:15:03.214 } 00:15:03.214 ], 00:15:03.214 "allow_any_host": true, 00:15:03.214 "hosts": [], 00:15:03.214 "serial_number": "SPDK00000000000002", 00:15:03.214 "model_number": "SPDK bdev Controller", 00:15:03.214 "max_namespaces": 32, 00:15:03.214 "min_cntlid": 1, 00:15:03.214 "max_cntlid": 65519, 00:15:03.214 "namespaces": [ 00:15:03.214 { 00:15:03.214 "nsid": 1, 00:15:03.214 "bdev_name": "Null2", 00:15:03.214 "name": "Null2", 00:15:03.214 "nguid": "AEB01D52FC91476CAEE68C0F21DD3B37", 00:15:03.214 "uuid": "aeb01d52-fc91-476c-aee6-8c0f21dd3b37" 00:15:03.214 } 00:15:03.214 ] 00:15:03.214 }, 00:15:03.214 { 00:15:03.214 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:15:03.214 "subtype": "NVMe", 00:15:03.214 "listen_addresses": [ 00:15:03.214 { 00:15:03.214 "trtype": "TCP", 00:15:03.214 "adrfam": "IPv4", 00:15:03.214 "traddr": "10.0.0.2", 
00:15:03.214 "trsvcid": "4420" 00:15:03.214 } 00:15:03.214 ], 00:15:03.214 "allow_any_host": true, 00:15:03.214 "hosts": [], 00:15:03.214 "serial_number": "SPDK00000000000003", 00:15:03.214 "model_number": "SPDK bdev Controller", 00:15:03.214 "max_namespaces": 32, 00:15:03.214 "min_cntlid": 1, 00:15:03.214 "max_cntlid": 65519, 00:15:03.214 "namespaces": [ 00:15:03.214 { 00:15:03.214 "nsid": 1, 00:15:03.214 "bdev_name": "Null3", 00:15:03.214 "name": "Null3", 00:15:03.214 "nguid": "27BFDFA0FCE3438D8AC37DC742E8C619", 00:15:03.214 "uuid": "27bfdfa0-fce3-438d-8ac3-7dc742e8c619" 00:15:03.214 } 00:15:03.214 ] 00:15:03.214 }, 00:15:03.214 { 00:15:03.214 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:15:03.214 "subtype": "NVMe", 00:15:03.214 "listen_addresses": [ 00:15:03.214 { 00:15:03.214 "trtype": "TCP", 00:15:03.214 "adrfam": "IPv4", 00:15:03.214 "traddr": "10.0.0.2", 00:15:03.214 "trsvcid": "4420" 00:15:03.214 } 00:15:03.214 ], 00:15:03.214 "allow_any_host": true, 00:15:03.214 "hosts": [], 00:15:03.214 "serial_number": "SPDK00000000000004", 00:15:03.214 "model_number": "SPDK bdev Controller", 00:15:03.214 "max_namespaces": 32, 00:15:03.214 "min_cntlid": 1, 00:15:03.214 "max_cntlid": 65519, 00:15:03.214 "namespaces": [ 00:15:03.214 { 00:15:03.214 "nsid": 1, 00:15:03.214 "bdev_name": "Null4", 00:15:03.214 "name": "Null4", 00:15:03.214 "nguid": "B3C0BC64412145F58DE0EC1403A8C186", 00:15:03.214 "uuid": "b3c0bc64-4121-45f5-8de0-ec1403a8c186" 00:15:03.214 } 00:15:03.214 ] 00:15:03.214 } 00:15:03.214 ] 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.214 17:32:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:15:03.214 17:32:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:03.214 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:03.214 rmmod nvme_tcp 00:15:03.215 rmmod nvme_fabrics 00:15:03.215 rmmod nvme_keyring 00:15:03.472 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 1027706 ']' 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 1027706 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1027706 ']' 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1027706 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1027706 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1027706' 00:15:03.473 killing process with pid 1027706 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1027706 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1027706 00:15:03.473 17:32:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:03.473 17:32:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.113 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:06.113 00:15:06.113 real 0m9.354s 00:15:06.113 user 0m5.463s 00:15:06.113 sys 0m4.898s 00:15:06.113 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.114 ************************************ 00:15:06.114 END TEST nvmf_target_discovery 00:15:06.114 ************************************ 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:06.114 ************************************ 00:15:06.114 START TEST nvmf_referrals 00:15:06.114 ************************************ 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:06.114 * Looking for test storage... 
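For reference, the nvmf_target_discovery run that ends above reduces to the following RPC cycle. This is a sketch, not the test script itself: rpc_cmd in the trace is SPDK's autotest wrapper, shown here as direct scripts/rpc.py calls against an already-running nvmf_tgt; every subcommand and flag is taken verbatim from the trace, only the loop framing is reconstructed.

  RPC=scripts/rpc.py
  for i in 1 2 3 4; do
      $RPC bdev_null_create Null$i 102400 512          # backing null bdev, size/block size as traced
      $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
          -a -s SPDK0000000000000$i                    # -a: allow any host, -s: serial number
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  # nvme discover then reports six records (the current discovery subsystem,
  # four NVMe subsystems, one referral), and nvmf_get_subsystems returns the
  # same state over JSON-RPC.  Teardown deletes each subsystem before its bdev:
  for i in 1 2 3 4; do
      $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      $RPC bdev_null_delete Null$i
  done
  $RPC nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430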
00:15:06.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:06.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.114 --rc genhtml_branch_coverage=1 00:15:06.114 --rc genhtml_function_coverage=1 00:15:06.114 --rc genhtml_legend=1 00:15:06.114 --rc geninfo_all_blocks=1 00:15:06.114 --rc geninfo_unexecuted_blocks=1 00:15:06.114 00:15:06.114 ' 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:06.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.114 --rc genhtml_branch_coverage=1 00:15:06.114 --rc genhtml_function_coverage=1 00:15:06.114 --rc genhtml_legend=1 00:15:06.114 --rc geninfo_all_blocks=1 00:15:06.114 --rc geninfo_unexecuted_blocks=1 00:15:06.114 00:15:06.114 ' 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:06.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.114 --rc genhtml_branch_coverage=1 00:15:06.114 --rc genhtml_function_coverage=1 00:15:06.114 --rc genhtml_legend=1 00:15:06.114 --rc geninfo_all_blocks=1 00:15:06.114 --rc geninfo_unexecuted_blocks=1 00:15:06.114 00:15:06.114 ' 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:06.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.114 --rc genhtml_branch_coverage=1 00:15:06.114 --rc genhtml_function_coverage=1 00:15:06.114 --rc genhtml_legend=1 00:15:06.114 --rc geninfo_all_blocks=1 00:15:06.114 --rc geninfo_unexecuted_blocks=1 00:15:06.114 00:15:06.114 ' 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:15:06.114 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:06.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
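One artifact worth noting a few records up: the message "test/nvmf/common.sh: line 33: [: : integer expression expected" is bash objecting to the traced test '[' '' -eq 1 ']', meaning an empty value reaches a numeric comparison on line 33 of test/nvmf/common.sh. The test simply evaluates false and the run continues, but the noise is avoidable. A hypothetical hardening is sketched below; FLAG is a stand-in, since the real variable name is not visible in this log.

  # Default an unset or empty flag to 0 before the numeric test,
  # so '[' never sees an empty operand with -eq.
  if [ "${FLAG:-0}" -eq 1 ]; then
      : # branch body elided; taken only when the flag is explicitly 1
  fi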
00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:15:06.115 17:32:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:12.687 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:12.687 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:15:12.687 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:12.687 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:12.687 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:12.687 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:12.687 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:12.687 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:15:12.687 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:12.687 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:15:12.687 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:15:12.687 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:15:12.687 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:15:12.688 17:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:12.688 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:12.688 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:12.688 
17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:12.688 Found net devices under 0000:86:00.0: cvl_0_0 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:12.688 Found net devices under 0000:86:00.1: cvl_0_1 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:12.688 17:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:12.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:12.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:15:12.688 00:15:12.688 --- 10.0.0.2 ping statistics --- 00:15:12.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.688 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:12.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:15:12.688 00:15:12.688 --- 10.0.0.1 ping statistics --- 00:15:12.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.688 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=1031273 00:15:12.688 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 1031273 00:15:12.689 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:12.689 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1031273 ']' 00:15:12.689 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.689 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:12.689 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
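The nvmf_tcp_init trace above sets up the loopback topology used for the rest of this test: the target-side port (cvl_0_0, one of the two E810 interfaces found earlier) is moved into a fresh network namespace and given 10.0.0.2, the initiator side (cvl_0_1) stays in the root namespace with 10.0.0.1, and both directions are verified with ping before nvmf_tgt is launched inside the namespace. Pulled out of the trace, the sequence is (interface names are specific to this machine; the test additionally tags the iptables rule with an SPDK_NVMF comment so it can strip it later):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace

The target is then started as ip netns exec cvl_0_0_ns_spdk nvmf_tgt (as traced above), which is why every listener in this test binds 10.0.0.2.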
00:15:12.689 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:12.689 17:32:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:12.689 [2024-10-14 17:32:10.899441] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:15:12.689 [2024-10-14 17:32:10.899484] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.689 [2024-10-14 17:32:10.973674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.689 [2024-10-14 17:32:11.014803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.689 [2024-10-14 17:32:11.014841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.689 [2024-10-14 17:32:11.014849] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.689 [2024-10-14 17:32:11.014855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.689 [2024-10-14 17:32:11.014860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.689 [2024-10-14 17:32:11.016445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.689 [2024-10-14 17:32:11.016551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.689 [2024-10-14 17:32:11.016668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.689 [2024-10-14 17:32:11.016668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:12.689 [2024-10-14 17:32:11.166096] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:15:12.689 [2024-10-14 17:32:11.179486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:15:12.689 17:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.689 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:15:12.690 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:12.690 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:12.690 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:12.690 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.690 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:12.690 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:12.690 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.690 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:15:12.690 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:12.690 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:15:12.690 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:15:12.690 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:12.690 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:12.690 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:12.690 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:12.949 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:15:12.949 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:12.949 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:15:12.949 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:12.949 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:15:12.949 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:12.949 17:32:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:13.207 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:13.208 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:15:13.208 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:15:13.208 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:15:13.208 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:13.208 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.467 17:32:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:13.467 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:13.726 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:15:13.726 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:13.726 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:15:13.726 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:15:13.726 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:13.726 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:13.726 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:13.726 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:15:13.726 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:15:13.726 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:15:13.726 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:15:13.726 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:13.726 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:13.985 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:13.985 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:15:13.985 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.985 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:13.985 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.985 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:13.985 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:15:13.985 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.985 17:32:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:13.985 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.985 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:15:13.985 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:15:13.985 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:13.985 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:13.985 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:13.985 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:13.985 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
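The trace above drives SPDK's discovery-referral feature from both ends: the target is configured through rpc_cmd (the harness wrapper around SPDK's scripts/rpc.py), and the host re-reads the referral list from the discovery log page with nvme-cli. A minimal standalone sketch of that round-trip, assuming a target is already serving RPC on the default /var/tmp/spdk.sock and discovery on 10.0.0.2:8009 as in this run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Target side: register a referral to another discovery service and read it back.
$rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
$rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# Host side: the referral must also appear in the discovery log page. Entries of
# subtype "current discovery subsystem" describe the service being queried, so
# the test filters them out before comparing addresses.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
    | sort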
00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:14.245 rmmod nvme_tcp 00:15:14.245 rmmod nvme_fabrics 00:15:14.245 rmmod nvme_keyring 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 1031273 ']' 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 1031273 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1031273 ']' 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1031273 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1031273 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1031273' 00:15:14.245 killing process with pid 1031273 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1031273 00:15:14.245 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1031273 00:15:14.505 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:14.505 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:14.505 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:14.505 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:15:14.505 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:15:14.505 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:14.505 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:15:14.505 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:14.505 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:14.505 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.505 17:32:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:14.505 17:32:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:17.042 00:15:17.042 real 0m10.866s 00:15:17.042 user 0m12.492s 00:15:17.042 sys 0m5.260s 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:17.042 ************************************ 00:15:17.042 END TEST nvmf_referrals 00:15:17.042 ************************************ 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:17.042 ************************************ 00:15:17.042 START TEST nvmf_connect_disconnect 00:15:17.042 ************************************ 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:17.042 * Looking for test storage... 00:15:17.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:17.042 17:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:15:17.042 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:17.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.043 --rc genhtml_branch_coverage=1 00:15:17.043 --rc genhtml_function_coverage=1 00:15:17.043 --rc genhtml_legend=1 00:15:17.043 --rc geninfo_all_blocks=1 00:15:17.043 --rc geninfo_unexecuted_blocks=1 00:15:17.043 00:15:17.043 ' 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:17.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.043 --rc genhtml_branch_coverage=1 00:15:17.043 --rc genhtml_function_coverage=1 00:15:17.043 --rc genhtml_legend=1 00:15:17.043 --rc geninfo_all_blocks=1 00:15:17.043 --rc geninfo_unexecuted_blocks=1 00:15:17.043 00:15:17.043 ' 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:17.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.043 --rc genhtml_branch_coverage=1 00:15:17.043 --rc genhtml_function_coverage=1 00:15:17.043 --rc genhtml_legend=1 00:15:17.043 --rc geninfo_all_blocks=1 00:15:17.043 --rc geninfo_unexecuted_blocks=1 00:15:17.043 00:15:17.043 ' 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:17.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.043 --rc genhtml_branch_coverage=1 00:15:17.043 --rc genhtml_function_coverage=1 00:15:17.043 --rc genhtml_legend=1 00:15:17.043 --rc geninfo_all_blocks=1 00:15:17.043 --rc geninfo_unexecuted_blocks=1 00:15:17.043 00:15:17.043 ' 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.043 17:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:17.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:15:17.043 17:32:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:23.617 
17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:23.617 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:23.617 
17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:23.617 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:23.617 Found net devices under 0000:86:00.0: cvl_0_0 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
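Before any of this traffic can flow, nvmf/common.sh probes the machine for a supported NIC: it collects PCI IDs for Intel E810/X722 and Mellanox parts, then resolves each matching PCI function to its kernel netdev through sysfs, which is how the 0000:86:00.x functions resolve to the cvl_0_* names in the surrounding lines. A rough standalone equivalent of that scan (a hypothetical rewrite, not the harness itself; 8086:159b is the E810 device ID seen in this run). The earlier "[: : integer expression expected" complaint from common.sh line 33 is also worth a note: '[' "$x" -eq 1 ']' is an error when $x is empty, which an -n guard avoids.

# Resolve E810 PCI functions to their netdev names via sysfs.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $dev ]] && echo "Found net devices under $pci: ${dev##*/}"
    done
done

# Empty-safe integer test; SOME_FLAG is a placeholder variable name.
[[ -n ${SOME_FLAG:-} && $SOME_FLAG -eq 1 ]] && echo "flag set"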
00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:23.617 Found net devices under 0000:86:00.1: cvl_0_1 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:23.617 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:23.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:23.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:15:23.618 00:15:23.618 --- 10.0.0.2 ping statistics --- 00:15:23.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.618 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:23.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:23.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:15:23.618 00:15:23.618 --- 10.0.0.1 ping statistics --- 00:15:23.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.618 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=1035362 00:15:23.618 17:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 1035362 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1035362 ']' 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:23.618 17:32:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:23.618 [2024-10-14 17:32:21.950227] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:15:23.618 [2024-10-14 17:32:21.950269] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.618 [2024-10-14 17:32:22.021712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:23.618 [2024-10-14 17:32:22.064410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.618 [2024-10-14 17:32:22.064444] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.618 [2024-10-14 17:32:22.064451] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.618 [2024-10-14 17:32:22.064457] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.618 [2024-10-14 17:32:22.064462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
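The target-side networking above is isolated in its own namespace: one E810 port (cvl_0_0, 10.0.0.2) moves into cvl_0_0_ns_spdk while its peer (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, and the firewall rule is tagged SPDK_NVMF so teardown can later strip exactly what was added. Condensed from the commands in the trace (device and address names as in this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Tag the rule so "iptables-save | grep -v SPDK_NVMF | iptables-restore"
# can remove it later without touching unrelated rules.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                               # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target -> initiator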
00:15:23.618 [2024-10-14 17:32:22.066051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.618 [2024-10-14 17:32:22.066160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.618 [2024-10-14 17:32:22.066192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.618 [2024-10-14 17:32:22.066193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:23.879 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:23.879 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:15:23.879 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:23.879 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:23.879 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:23.879 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.879 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:23.880 [2024-10-14 17:32:22.821417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:23.880 17:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:23.880 [2024-10-14 17:32:22.897190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:15:23.880 17:32:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:15:27.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:40.325 rmmod nvme_tcp 00:15:40.325 rmmod nvme_fabrics 00:15:40.325 rmmod nvme_keyring 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 1035362 ']' 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 1035362 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1035362 ']' 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1035362 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
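connect_disconnect.sh then provisions a complete target over RPC and loops the host through connect/disconnect cycles; the five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above are those iterations (num_iterations=5). A minimal sketch of the provisioning plus the loop, reusing $rpc from earlier and the host NQN/ID variables that nvmf/common.sh derives from nvme gen-hostnqn (the real script's retry and wait handling is omitted):

# Target provisioning, mirroring the RPC calls in the trace above.
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
$rpc bdev_malloc_create 64 512      # 64 MiB bdev, 512 B blocks -> "Malloc0"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: repeated connect/disconnect against the subsystem.
for i in {1..5}; do
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
done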
00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1035362 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1035362' 00:15:40.325 killing process with pid 1035362 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1035362 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1035362 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.325 17:32:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:42.897 00:15:42.897 real 0m25.831s 00:15:42.897 user 1m10.792s 00:15:42.897 sys 0m5.873s 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:42.897 ************************************ 00:15:42.897 END TEST nvmf_connect_disconnect 00:15:42.897 ************************************ 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:42.897 17:32:41 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:42.897 ************************************ 00:15:42.897 START TEST nvmf_multitarget 00:15:42.897 ************************************ 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:42.897 * Looking for test storage... 00:15:42.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:15:42.897 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:42.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.898 --rc genhtml_branch_coverage=1 00:15:42.898 --rc genhtml_function_coverage=1 00:15:42.898 --rc genhtml_legend=1 00:15:42.898 --rc geninfo_all_blocks=1 00:15:42.898 --rc geninfo_unexecuted_blocks=1 00:15:42.898 00:15:42.898 ' 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:42.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.898 --rc genhtml_branch_coverage=1 00:15:42.898 --rc genhtml_function_coverage=1 00:15:42.898 --rc genhtml_legend=1 00:15:42.898 --rc geninfo_all_blocks=1 00:15:42.898 --rc geninfo_unexecuted_blocks=1 00:15:42.898 00:15:42.898 ' 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:42.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.898 --rc genhtml_branch_coverage=1 00:15:42.898 --rc genhtml_function_coverage=1 00:15:42.898 --rc genhtml_legend=1 00:15:42.898 --rc geninfo_all_blocks=1 00:15:42.898 --rc geninfo_unexecuted_blocks=1 00:15:42.898 00:15:42.898 ' 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:42.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.898 --rc genhtml_branch_coverage=1 00:15:42.898 --rc genhtml_function_coverage=1 00:15:42.898 --rc genhtml_legend=1 00:15:42.898 --rc geninfo_all_blocks=1 00:15:42.898 --rc geninfo_unexecuted_blocks=1 00:15:42.898 00:15:42.898 ' 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:42.898 17:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:42.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:42.898 17:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:15:42.898 17:32:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:49.470 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:49.470 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
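[Annotation] Before picking its test ports, nvmftestinit classifies the host's NICs by PCI vendor:device ID: the bash arrays being filled above hold the known Intel E810 (0x1592, 0x159b) and X722 (0x37d2) IDs, with the Mellanox list following just below. The values come from a pci_bus_cache populated elsewhere in scripts/common.sh; purely as a familiar stand-in (the script does not call lspci), the same scan looks roughly like:

    # Rough equivalent of the ID matching above, for orientation only:
    lspci -Dnn | grep -E '8086:(1592|159b)'   # Intel E810 ports (the two found below)
    lspci -Dnn | grep -E '8086:37d2'          # Intel X722 ports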
00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:49.471 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:49.471 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:49.471 Found net devices under 0000:86:00.0: cvl_0_0 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:49.471 Found net devices under 0000:86:00.1: cvl_0_1 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:49.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:15:49.471 00:15:49.471 --- 10.0.0.2 ping statistics --- 00:15:49.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.471 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:49.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:49.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:15:49.471 00:15:49.471 --- 10.0.0.1 ping statistics --- 00:15:49.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.471 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.471 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:49.472 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:49.472 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:49.472 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:49.472 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:49.472 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:49.472 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=1041753 00:15:49.472 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:49.472 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 1041753 00:15:49.472 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1041753 ']' 00:15:49.472 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.472 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:49.472 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.472 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:49.472 17:32:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:49.472 [2024-10-14 17:32:47.838546] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
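[Annotation] At this point nvmf_tcp_init has split the two E810 ports into a point-to-point pair: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace with the target address, cvl_0_1 stayed in the root namespace as the initiator, an iptables rule admitted TCP port 4420, and the two pings (0.391 ms out, 0.188 ms back) verified both directions. Collected into one runnable sequence, with the interface names taken from this trace (substitute your own ports):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                 # the check shown above

nvmfappstart then launches nvmf_tgt inside that namespace and blocks on waitforlisten until the RPC socket answers. A sketch of the launch-and-wait, assuming an SPDK build tree; the real helper gives up after a bounded number of retries (max_retries=100 in the trace) rather than looping forever:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket until the app answers (the core idea of waitforlisten):
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done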
00:15:49.472 [2024-10-14 17:32:47.838590] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.472 [2024-10-14 17:32:47.908230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:49.472 [2024-10-14 17:32:47.950648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.472 [2024-10-14 17:32:47.950687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.472 [2024-10-14 17:32:47.950694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.472 [2024-10-14 17:32:47.950700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.472 [2024-10-14 17:32:47.950705] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.472 [2024-10-14 17:32:47.952275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.472 [2024-10-14 17:32:47.952406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.472 [2024-10-14 17:32:47.952509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.472 [2024-10-14 17:32:47.952510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:49.472 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:49.472 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:15:49.472 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:49.472 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:49.472 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:49.472 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.472 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:49.472 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:49.472 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:49.472 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:49.472 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:49.472 "nvmf_tgt_1" 00:15:49.472 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:49.472 "nvmf_tgt_2" 00:15:49.472 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
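[Annotation] The multitarget test body is a create/count/delete exercise driven through multitarget_rpc.py, the test's own wrapper around the nvmf_*_target RPCs. Condensed, with the delete half following just below in the trace:

    rpc=test/nvmf/target/multitarget_rpc.py
    $rpc nvmf_get_targets | jq length           # 1: only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    $rpc nvmf_get_targets | jq length           # 3
    $rpc nvmf_delete_target -n nvmf_tgt_1       # prints "true" on success
    $rpc nvmf_delete_target -n nvmf_tgt_2
    $rpc nvmf_get_targets | jq length           # back to 1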
00:15:49.472 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:49.472 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:49.472 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:49.472 true 00:15:49.731 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:49.731 true 00:15:49.731 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:49.731 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:49.731 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:49.731 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:49.731 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:49.731 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:49.731 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:15:49.731 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:49.731 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:15:49.731 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:49.731 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:49.731 rmmod nvme_tcp 00:15:49.731 rmmod nvme_fabrics 00:15:49.731 rmmod nvme_keyring 00:15:49.989 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:49.989 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:15:49.989 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:15:49.989 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 1041753 ']' 00:15:49.989 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 1041753 00:15:49.990 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1041753 ']' 00:15:49.990 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1041753 00:15:49.990 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:15:49.990 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:49.990 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1041753 00:15:49.990 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:49.990 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:49.990 17:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1041753' 00:15:49.990 killing process with pid 1041753 00:15:49.990 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1041753 00:15:49.990 17:32:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1041753 00:15:49.990 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:49.990 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:49.990 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:49.990 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:15:49.990 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:15:49.990 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:49.990 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:15:49.990 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:49.990 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:49.990 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.990 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:49.990 17:32:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:52.526 00:15:52.526 real 0m9.604s 00:15:52.526 user 0m7.099s 00:15:52.526 sys 0m4.899s 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:52.526 ************************************ 00:15:52.526 END TEST nvmf_multitarget 00:15:52.526 ************************************ 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:52.526 ************************************ 00:15:52.526 START TEST nvmf_rpc 00:15:52.526 ************************************ 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:52.526 * Looking for test storage... 
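[Annotation] Every test in this log runs under the harness's run_test wrapper, which produces the starred START/END banners and the real/user/sys timing summaries seen above, and which has just dispatched test/nvmf/target/rpc.sh --transport=tcp. Its shape, inferred from the output here (a sketch, not the literal autotest_common.sh code; the real helper also validates its argument count, per the '[' 3 -le 1 ']' trace):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"             # emits the real/user/sys summary seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test nvmf_rpc test/nvmf/target/rpc.sh --transport=tcp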
00:15:52.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:52.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.526 --rc genhtml_branch_coverage=1 00:15:52.526 --rc genhtml_function_coverage=1 00:15:52.526 --rc genhtml_legend=1 00:15:52.526 --rc geninfo_all_blocks=1 00:15:52.526 --rc geninfo_unexecuted_blocks=1 00:15:52.526 00:15:52.526 ' 00:15:52.526 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:52.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.526 --rc genhtml_branch_coverage=1 00:15:52.526 --rc genhtml_function_coverage=1 00:15:52.526 --rc genhtml_legend=1 00:15:52.526 --rc geninfo_all_blocks=1 00:15:52.526 --rc geninfo_unexecuted_blocks=1 00:15:52.526 00:15:52.526 ' 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:52.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.527 --rc genhtml_branch_coverage=1 00:15:52.527 --rc genhtml_function_coverage=1 00:15:52.527 --rc genhtml_legend=1 00:15:52.527 --rc geninfo_all_blocks=1 00:15:52.527 --rc geninfo_unexecuted_blocks=1 00:15:52.527 00:15:52.527 ' 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:52.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.527 --rc genhtml_branch_coverage=1 00:15:52.527 --rc genhtml_function_coverage=1 00:15:52.527 --rc genhtml_legend=1 00:15:52.527 --rc geninfo_all_blocks=1 00:15:52.527 --rc geninfo_unexecuted_blocks=1 00:15:52.527 00:15:52.527 ' 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:52.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:52.527 17:32:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:15:52.527 17:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:59.102 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:59.102 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:59.103 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:59.103 Found net devices under 0000:86:00.0: cvl_0_0 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:59.103 Found net devices under 0000:86:00.1: cvl_0_1 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:59.103 17:32:57 
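Each "Found net devices under ..." line is produced by globbing sysfs: a PCI function bound to the ice driver exposes its renamed interfaces under /sys/bus/pci/devices/<bdf>/net/. A standalone sketch of that lookup (bdf value taken from the scan above):

  pci=0000:86:00.0
  for dev in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$dev" ] || continue                      # glob may match nothing
      echo "Found net device under $pci: ${dev##*/}" # prints cvl_0_0 here
  done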
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:59.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:59.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:15:59.103 00:15:59.103 --- 10.0.0.2 ping statistics --- 00:15:59.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.103 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:59.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
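The namespace plumbing traced above is ordinary iproute2 work: the target port cvl_0_0 moves into a private namespace, its peer cvl_0_1 stays in the root namespace as the initiator, and one iptables rule admits NVMe/TCP traffic before both directions are ping-tested. Condensed from the trace (the ipts wrapper is just iptables plus the SPDK_NVMF comment):

  ip netns add cvl_0_0_ns_spdk                       # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator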
00:15:59.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:15:59.103 00:15:59.103 --- 10.0.0.1 ping statistics --- 00:15:59.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.103 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=1045534 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 1045534 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1045534 ']' 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:59.103 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.104 [2024-10-14 17:32:57.510544] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
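With connectivity proven both ways, nvmfappstart launches the target inside the namespace (NVMF_APP is prefixed with the ip netns exec command) and blocks until the RPC socket answers; -m 0xF yields the four "Reactor started" notices on cores 0-3 below. A minimal launch-and-wait sketch, using scripts/rpc.py in place of the test's rpc_cmd/waitforlisten helpers:

  modprobe nvme-tcp                                  # initiator-side kernel driver
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll until the app listens on its default UNIX socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1                   # bail out if it died early
      sleep 0.5
  done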
00:15:59.104 [2024-10-14 17:32:57.510588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.104 [2024-10-14 17:32:57.580539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:59.104 [2024-10-14 17:32:57.620468] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.104 [2024-10-14 17:32:57.620510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:59.104 [2024-10-14 17:32:57.620517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:59.104 [2024-10-14 17:32:57.620523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:59.104 [2024-10-14 17:32:57.620528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:59.104 [2024-10-14 17:32:57.622098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.104 [2024-10-14 17:32:57.622206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:59.104 [2024-10-14 17:32:57.622315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.104 [2024-10-14 17:32:57.622316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:59.104 "tick_rate": 2100000000, 00:15:59.104 "poll_groups": [ 00:15:59.104 { 00:15:59.104 "name": "nvmf_tgt_poll_group_000", 00:15:59.104 "admin_qpairs": 0, 00:15:59.104 "io_qpairs": 0, 00:15:59.104 "current_admin_qpairs": 0, 00:15:59.104 "current_io_qpairs": 0, 00:15:59.104 "pending_bdev_io": 0, 00:15:59.104 "completed_nvme_io": 0, 00:15:59.104 "transports": [] 00:15:59.104 }, 00:15:59.104 { 00:15:59.104 "name": "nvmf_tgt_poll_group_001", 00:15:59.104 "admin_qpairs": 0, 00:15:59.104 "io_qpairs": 0, 00:15:59.104 "current_admin_qpairs": 0, 00:15:59.104 "current_io_qpairs": 0, 00:15:59.104 "pending_bdev_io": 0, 00:15:59.104 "completed_nvme_io": 0, 00:15:59.104 "transports": [] 00:15:59.104 }, 00:15:59.104 { 00:15:59.104 "name": "nvmf_tgt_poll_group_002", 00:15:59.104 "admin_qpairs": 0, 00:15:59.104 "io_qpairs": 0, 00:15:59.104 
"current_admin_qpairs": 0, 00:15:59.104 "current_io_qpairs": 0, 00:15:59.104 "pending_bdev_io": 0, 00:15:59.104 "completed_nvme_io": 0, 00:15:59.104 "transports": [] 00:15:59.104 }, 00:15:59.104 { 00:15:59.104 "name": "nvmf_tgt_poll_group_003", 00:15:59.104 "admin_qpairs": 0, 00:15:59.104 "io_qpairs": 0, 00:15:59.104 "current_admin_qpairs": 0, 00:15:59.104 "current_io_qpairs": 0, 00:15:59.104 "pending_bdev_io": 0, 00:15:59.104 "completed_nvme_io": 0, 00:15:59.104 "transports": [] 00:15:59.104 } 00:15:59.104 ] 00:15:59.104 }' 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.104 [2024-10-14 17:32:57.876031] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.104 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:59.104 "tick_rate": 2100000000, 00:15:59.104 "poll_groups": [ 00:15:59.104 { 00:15:59.104 "name": "nvmf_tgt_poll_group_000", 00:15:59.104 "admin_qpairs": 0, 00:15:59.104 "io_qpairs": 0, 00:15:59.104 "current_admin_qpairs": 0, 00:15:59.104 "current_io_qpairs": 0, 00:15:59.104 "pending_bdev_io": 0, 00:15:59.104 "completed_nvme_io": 0, 00:15:59.104 "transports": [ 00:15:59.104 { 00:15:59.104 "trtype": "TCP" 00:15:59.104 } 00:15:59.105 ] 00:15:59.105 }, 00:15:59.105 { 00:15:59.105 "name": "nvmf_tgt_poll_group_001", 00:15:59.105 "admin_qpairs": 0, 00:15:59.105 "io_qpairs": 0, 00:15:59.105 "current_admin_qpairs": 0, 00:15:59.105 "current_io_qpairs": 0, 00:15:59.105 "pending_bdev_io": 0, 00:15:59.105 "completed_nvme_io": 0, 00:15:59.105 "transports": [ 00:15:59.105 { 00:15:59.105 "trtype": "TCP" 00:15:59.105 } 00:15:59.105 ] 00:15:59.105 }, 00:15:59.105 { 00:15:59.105 "name": "nvmf_tgt_poll_group_002", 00:15:59.105 "admin_qpairs": 0, 00:15:59.105 "io_qpairs": 0, 00:15:59.105 "current_admin_qpairs": 0, 00:15:59.105 "current_io_qpairs": 0, 00:15:59.105 "pending_bdev_io": 0, 00:15:59.105 "completed_nvme_io": 0, 00:15:59.105 "transports": [ 00:15:59.105 { 00:15:59.105 "trtype": "TCP" 
00:15:59.105 } 00:15:59.105 ] 00:15:59.105 }, 00:15:59.105 { 00:15:59.105 "name": "nvmf_tgt_poll_group_003", 00:15:59.105 "admin_qpairs": 0, 00:15:59.105 "io_qpairs": 0, 00:15:59.105 "current_admin_qpairs": 0, 00:15:59.105 "current_io_qpairs": 0, 00:15:59.105 "pending_bdev_io": 0, 00:15:59.105 "completed_nvme_io": 0, 00:15:59.105 "transports": [ 00:15:59.105 { 00:15:59.105 "trtype": "TCP" 00:15:59.105 } 00:15:59.105 ] 00:15:59.105 } 00:15:59.105 ] 00:15:59.105 }' 00:15:59.105 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:59.105 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:59.105 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:59.105 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:59.105 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:59.105 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:59.105 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:59.105 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:59.105 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:59.105 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:59.105 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:59.105 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:59.105 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:59.105 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:59.105 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.105 17:32:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.105 Malloc1 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
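The jcount and jsum expansions interleaved with the stats JSON above are small jq helpers from target/rpc.sh: one counts the values a filter yields, the other sums them. A sketch matching the traced pipelines (stats holds the nvmf_get_stats output):

  jcount() {                        # number of values the filter produces
      jq "$1" | wc -l
  }
  jsum() {                          # numeric sum of the filter's values
      jq "$1" | awk '{s+=$1} END {print s}'
  }
  jcount '.poll_groups[].name' <<< "$stats"          # expect 4 poll groups
  jsum '.poll_groups[].io_qpairs' <<< "$stats"       # expect 0 while idle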
common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.105 [2024-10-14 17:32:58.058457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:15:59.105 [2024-10-14 17:32:58.087037] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:15:59.105 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:59.105 could not add new controller: failed to write to nvme-fabrics device 00:15:59.105 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:59.105 17:32:58 
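That connect attempt is meant to fail: allow_any_host was switched off with -d and the host NQN was never registered, so the target answers "does not allow host" and the NOT wrapper turns the nvme-cli failure into a pass. A simplified sketch of the inversion (the real helper in autotest_common.sh also inspects how the command failed):

  NOT() {                           # succeed only if the wrapped command fails
      if "$@"; then
          return 1                  # unexpected success
      fi
      return 0
  }
  # assertion from the trace: this login must be rejected
  NOT nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562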
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:59.106 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:59.106 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:59.106 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:59.106 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.106 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.106 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.106 17:32:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:00.490 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:00.490 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:00.490 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:00.490 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:00.490 17:32:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:02.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:02.395 [2024-10-14 17:33:01.390829] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:16:02.395 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:02.395 could not add new controller: failed to write to nvme-fabrics device 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:02.395 
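Both rejected logins are resolved the same way: re-enabling nvmf_subsystem_allow_any_host with -e lets the identical nvme connect succeed without registering the host. The toggle pair, with scripts/rpc.py standing in for the rpc_cmd wrapper:

  # -d: only hosts added via nvmf_subsystem_add_host may connect
  ./scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
  # -e: any host NQN is accepted again
  ./scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1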
17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.395 17:33:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:03.775 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:03.775 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:03.775 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:03.775 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:03.775 17:33:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:05.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:05.681 
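From target/rpc.sh@81 onward the test repeats one create/attach/detach/destroy cycle $loops (= 5) times; every iteration traced below instantiates this skeleton (paraphrased from the rpc.sh trace, hostnqn flags omitted):

  for i in $(seq 1 "$loops"); do
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
      rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
      waitforserial SPDKISFASTANDAWESOME
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      waitforserial_disconnect SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done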
17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.681 [2024-10-14 17:33:04.757429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.681 17:33:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:07.060 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:07.060 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:07.060 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:07.060 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:07.060 17:33:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:08.977 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:08.977 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:08.977 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:08.977 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:08.977 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.977 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:08.977 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:08.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.977 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:08.977 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:08.977 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:08.977 17:33:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.977 [2024-10-14 17:33:08.043901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.977 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.978 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:08.978 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.978 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.978 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
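waitforserial and waitforserial_disconnect, expanded repeatedly above, poll lsblk until a block device carrying the subsystem's serial number appears or disappears. A condensed sketch of both loops (the originals also compare against an expected device count):

  waitforserial() {                 # block until a device with serial $1 exists
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          sleep 2
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
      done
      return 1
  }
  waitforserial_disconnect() {      # block until no device carries serial $1
      while lsblk -l -o NAME,SERIAL | grep -q -w "$1"; do
          sleep 1
      done
  }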
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.978 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:08.978 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.978 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.978 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.978 17:33:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:10.355 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:10.356 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:10.356 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:10.356 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:10.356 17:33:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:12.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.342 [2024-10-14 17:33:11.344255] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.342 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:12.343 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.343 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.343 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.343 17:33:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:13.316 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:13.316 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:13.316 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:13.316 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:13.575 17:33:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:15.480 
17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:15.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.480 [2024-10-14 17:33:14.608687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.480 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.739 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.739 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:15.739 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.739 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.739 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.739 17:33:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:16.676 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:16.676 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:16.676 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:16.676 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:16.676 17:33:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:19.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.210 [2024-10-14 17:33:17.969294] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.210 17:33:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:20.148 17:33:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:20.148 17:33:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:20.148 17:33:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:20.148 17:33:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:20.148 17:33:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:22.053 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:22.053 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:22.053 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:22.053 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:22.053 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:22.053 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:22.053 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:22.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:22.313 
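[editor's note] Every iteration of the target/rpc.sh@81 loop above walks the same subsystem lifecycle. A condensed replay of one iteration, using only the RPCs and arguments that appear verbatim in the trace (the hostnqn/hostid pair is the one generated earlier by nvme gen-hostnqn; error handling and xtrace noise omitted):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME           # rpc.sh@82
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420  # rpc.sh@83
    $rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 5                      # rpc.sh@84: bdev as NSID 5
    $rpc nvmf_subsystem_allow_any_host $nqn                           # rpc.sh@85
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 \
        -t tcp -n $nqn -a 10.0.0.2 -s 4420                            # rpc.sh@86
    waitforserial SPDKISFASTANDAWESOME                                # rpc.sh@88
    nvme disconnect -n $nqn                                           # rpc.sh@90
    waitforserial_disconnect SPDKISFASTANDAWESOME                     # rpc.sh@91
    $rpc nvmf_subsystem_remove_ns $nqn 5                              # rpc.sh@93
    $rpc nvmf_delete_subsystem $nqn                                   # rpc.sh@94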
17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.313 [2024-10-14 17:33:21.328563] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.313 [2024-10-14 17:33:21.376679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.313 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.314 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.314 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.314 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.314 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.314 
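[editor's note] The target/rpc.sh@99 loop running here exercises namespace add/remove without any host connection. nvmf_subsystem_add_ns is called without -n, and the trace then removes NSID 1, consistent with the target auto-assigning the first free NSID when none is given. One iteration, in the same $rpc/$nqn shorthand assumed above:

    $rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME           # rpc.sh@100
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420  # rpc.sh@101
    $rpc nvmf_subsystem_add_ns $nqn Malloc1   # rpc.sh@102: no -n, NSID auto-assigned
    $rpc nvmf_subsystem_allow_any_host $nqn   # rpc.sh@103
    $rpc nvmf_subsystem_remove_ns $nqn 1      # rpc.sh@105: drop the auto-assigned NSID
    $rpc nvmf_delete_subsystem $nqn           # rpc.sh@107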
17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.314 [2024-10-14 17:33:21.424821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.314 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.314 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:22.314 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.314 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.314 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.314 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:22.314 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.314 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.314 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.314 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.314 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.314 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.314 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.574 [2024-10-14 17:33:21.472993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.574 [2024-10-14 17:33:21.521154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:16:22.574 "tick_rate": 2100000000,
00:16:22.574 "poll_groups": [
00:16:22.574 {
00:16:22.574 "name": "nvmf_tgt_poll_group_000",
00:16:22.574 "admin_qpairs": 2,
00:16:22.574 "io_qpairs": 168,
00:16:22.574 "current_admin_qpairs": 0,
00:16:22.574 "current_io_qpairs": 0,
00:16:22.574 "pending_bdev_io": 0,
00:16:22.574 "completed_nvme_io": 268,
00:16:22.574 "transports": [
00:16:22.574 {
00:16:22.574 "trtype": "TCP"
00:16:22.574 }
00:16:22.574 ]
00:16:22.574 },
00:16:22.574 {
00:16:22.574 "name": "nvmf_tgt_poll_group_001",
00:16:22.574 "admin_qpairs": 2,
00:16:22.574 "io_qpairs": 168,
00:16:22.574 "current_admin_qpairs": 0,
00:16:22.574 "current_io_qpairs": 0,
00:16:22.574 "pending_bdev_io": 0,
00:16:22.574 "completed_nvme_io": 268,
00:16:22.574 "transports": [
00:16:22.574 {
00:16:22.574 "trtype": "TCP"
00:16:22.574 }
00:16:22.574 ]
00:16:22.574 },
00:16:22.574 {
00:16:22.574 "name": "nvmf_tgt_poll_group_002",
00:16:22.574 "admin_qpairs": 1,
00:16:22.574 "io_qpairs": 168,
00:16:22.574 "current_admin_qpairs": 0,
00:16:22.574 "current_io_qpairs": 0,
00:16:22.574 "pending_bdev_io": 0,
00:16:22.574 "completed_nvme_io": 168,
00:16:22.574 "transports": [
00:16:22.574 {
00:16:22.574 "trtype": "TCP"
00:16:22.574 }
00:16:22.574 ]
00:16:22.574 },
00:16:22.574 {
00:16:22.574 "name": "nvmf_tgt_poll_group_003",
00:16:22.574 "admin_qpairs": 2,
00:16:22.574 "io_qpairs": 168,
00:16:22.574 "current_admin_qpairs": 0,
00:16:22.574 "current_io_qpairs": 0,
00:16:22.574 "pending_bdev_io": 0,
00:16:22.574 "completed_nvme_io": 318,
00:16:22.574 "transports": [
00:16:22.574 {
00:16:22.574 "trtype": "TCP"
00:16:22.574 }
00:16:22.574 ]
00:16:22.574 }
00:16:22.574 ]
00:16:22.574 }'
00:16:22.574 17:33:21
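[editor's note] The jsum helper traced next sums one numeric field across all poll groups by piping jq output through awk. A standalone equivalent, assuming the stats JSON captured above is held in $stats:

    jsum() {
        local filter=$1
        # emit one number per poll group, then accumulate in awk
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7,  matching (( 7 > 0 ))
    jsum '.poll_groups[].io_qpairs'      # 168*4  = 672, matching (( 672 > 0 ))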
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:22.574 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:22.574 rmmod nvme_tcp 00:16:22.574 rmmod nvme_fabrics 00:16:22.574 rmmod nvme_keyring 00:16:22.834 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:22.834 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:22.834 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:22.834 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 1045534 ']' 00:16:22.834 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 1045534 00:16:22.834 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1045534 ']' 00:16:22.834 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1045534 00:16:22.834 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:16:22.834 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:22.834 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1045534 00:16:22.834 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:22.834 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:22.834 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
1045534' 00:16:22.834 killing process with pid 1045534 00:16:22.834 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1045534 00:16:22.834 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1045534 00:16:22.834 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:23.093 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:23.093 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:23.093 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:23.093 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:16:23.093 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:23.093 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:16:23.093 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:23.093 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:23.093 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.093 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.093 17:33:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.003 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:25.003 00:16:25.003 real 0m32.800s 00:16:25.003 user 1m38.681s 00:16:25.003 sys 0m6.550s 00:16:25.003 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:25.003 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.003 ************************************ 00:16:25.003 END TEST nvmf_rpc 00:16:25.003 ************************************ 00:16:25.003 17:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:25.003 17:33:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:25.003 17:33:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:25.003 17:33:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:25.003 ************************************ 00:16:25.003 START TEST nvmf_invalid 00:16:25.003 ************************************ 00:16:25.003 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:25.263 * Looking for test storage... 
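[editor's note] The nvmftestfini teardown above unloads the nvme-tcp/nvme-fabrics modules, kills the target process, and then strips only the firewall rules the harness itself installed. The iptr step (nvmf/common.sh@789) relies on every setup-time rule carrying an '-m comment --comment SPDK_NVMF:...' marker, so filtering those lines out of a full save and restoring deletes exactly them:

    # Remove only the SPDK_NVMF-tagged rules; everything else is restored untouched.
    iptables-save | grep -v SPDK_NVMF | iptables-restore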
00:16:25.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:25.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.263 --rc genhtml_branch_coverage=1 00:16:25.263 --rc genhtml_function_coverage=1 00:16:25.263 --rc genhtml_legend=1 00:16:25.263 --rc geninfo_all_blocks=1 00:16:25.263 --rc geninfo_unexecuted_blocks=1 00:16:25.263 00:16:25.263 ' 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:25.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.263 --rc genhtml_branch_coverage=1 00:16:25.263 --rc genhtml_function_coverage=1 00:16:25.263 --rc genhtml_legend=1 00:16:25.263 --rc geninfo_all_blocks=1 00:16:25.263 --rc geninfo_unexecuted_blocks=1 00:16:25.263 00:16:25.263 ' 00:16:25.263 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:25.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.264 --rc genhtml_branch_coverage=1 00:16:25.264 --rc genhtml_function_coverage=1 00:16:25.264 --rc genhtml_legend=1 00:16:25.264 --rc geninfo_all_blocks=1 00:16:25.264 --rc geninfo_unexecuted_blocks=1 00:16:25.264 00:16:25.264 ' 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:25.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.264 --rc genhtml_branch_coverage=1 00:16:25.264 --rc genhtml_function_coverage=1 00:16:25.264 --rc genhtml_legend=1 00:16:25.264 --rc geninfo_all_blocks=1 00:16:25.264 --rc geninfo_unexecuted_blocks=1 00:16:25.264 00:16:25.264 ' 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:25.264 17:33:24 
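[editor's note] The scripts/common.sh trace above ('lt 1.15 2') gates the LCOV_OPTS exports on the installed lcov being older than 2.x. It splits each dotted version into an array and compares component-wise. A condensed sketch of that comparison; the real cmp_versions also handles '>', '>=', '<=' and equality, which this omits:

    lt() {  # true (exit 0) when version $1 < version $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov is pre-2.x"   # 1 < 2 decides at the first component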
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:25.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:25.264 17:33:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:31.836 17:33:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:31.836 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:31.836 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:31.836 Found net devices under 0000:86:00.0: cvl_0_0 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:31.836 Found net devices under 0000:86:00.1: cvl_0_1 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.836 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:31.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:16:31.837 00:16:31.837 --- 10.0.0.2 ping statistics --- 00:16:31.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.837 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:31.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:31.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:16:31.837 00:16:31.837 --- 10.0.0.1 ping statistics --- 00:16:31.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.837 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=1053779 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 1053779 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1053779 ']' 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:31.837 [2024-10-14 17:33:30.361072] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
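
Condensed from the nvmf_tcp_init trace above: one e810 port (cvl_0_0) is moved into a fresh network namespace to act as the target, its peer (cvl_0_1) stays in the root namespace as the initiator, the pair gets 10.0.0.2/10.0.0.1, the NVMe/TCP port is opened, connectivity is verified in both directions, and nvmf_tgt is then launched inside the namespace. A sketch of those steps, assuming the interface names and flags shown in the log (the harness wraps the iptables rule and the app launch in helper functions; this is not the literal script):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
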
00:16:31.837 [2024-10-14 17:33:30.361117] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.837 [2024-10-14 17:33:30.433677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:31.837 [2024-10-14 17:33:30.476115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.837 [2024-10-14 17:33:30.476151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.837 [2024-10-14 17:33:30.476159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.837 [2024-10-14 17:33:30.476165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.837 [2024-10-14 17:33:30.476170] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:31.837 [2024-10-14 17:33:30.477593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.837 [2024-10-14 17:33:30.477632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.837 [2024-10-14 17:33:30.477724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.837 [2024-10-14 17:33:30.477724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode14276 00:16:31.837 [2024-10-14 17:33:30.786739] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:31.837 { 00:16:31.837 "nqn": "nqn.2016-06.io.spdk:cnode14276", 00:16:31.837 "tgt_name": "foobar", 00:16:31.837 "method": "nvmf_create_subsystem", 00:16:31.837 "req_id": 1 00:16:31.837 } 00:16:31.837 Got JSON-RPC error response 00:16:31.837 response: 00:16:31.837 { 00:16:31.837 "code": -32603, 00:16:31.837 "message": "Unable to find target foobar" 00:16:31.837 }' 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:31.837 { 00:16:31.837 "nqn": "nqn.2016-06.io.spdk:cnode14276", 00:16:31.837 "tgt_name": "foobar", 00:16:31.837 "method": "nvmf_create_subsystem", 00:16:31.837 "req_id": 1 00:16:31.837 } 00:16:31.837 Got JSON-RPC error response 00:16:31.837 
response: 00:16:31.837 { 00:16:31.837 "code": -32603, 00:16:31.837 "message": "Unable to find target foobar" 00:16:31.837 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:31.837 17:33:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5076 00:16:32.096 [2024-10-14 17:33:30.987437] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5076: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:32.096 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:32.096 { 00:16:32.096 "nqn": "nqn.2016-06.io.spdk:cnode5076", 00:16:32.096 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:32.096 "method": "nvmf_create_subsystem", 00:16:32.096 "req_id": 1 00:16:32.096 } 00:16:32.096 Got JSON-RPC error response 00:16:32.096 response: 00:16:32.096 { 00:16:32.096 "code": -32602, 00:16:32.096 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:32.096 }' 00:16:32.096 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:32.096 { 00:16:32.096 "nqn": "nqn.2016-06.io.spdk:cnode5076", 00:16:32.096 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:32.096 "method": "nvmf_create_subsystem", 00:16:32.096 "req_id": 1 00:16:32.096 } 00:16:32.096 Got JSON-RPC error response 00:16:32.096 response: 00:16:32.096 { 00:16:32.096 "code": -32602, 00:16:32.096 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:32.096 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:32.096 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:32.096 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13020 00:16:32.096 [2024-10-14 17:33:31.188080] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13020: invalid model number 'SPDK_Controller' 00:16:32.096 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:32.096 { 00:16:32.096 "nqn": "nqn.2016-06.io.spdk:cnode13020", 00:16:32.096 "model_number": "SPDK_Controller\u001f", 00:16:32.096 "method": "nvmf_create_subsystem", 00:16:32.096 "req_id": 1 00:16:32.096 } 00:16:32.096 Got JSON-RPC error response 00:16:32.096 response: 00:16:32.096 { 00:16:32.096 "code": -32602, 00:16:32.096 "message": "Invalid MN SPDK_Controller\u001f" 00:16:32.096 }' 00:16:32.096 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:32.096 { 00:16:32.096 "nqn": "nqn.2016-06.io.spdk:cnode13020", 00:16:32.096 "model_number": "SPDK_Controller\u001f", 00:16:32.096 "method": "nvmf_create_subsystem", 00:16:32.096 "req_id": 1 00:16:32.096 } 00:16:32.096 Got JSON-RPC error response 00:16:32.096 response: 00:16:32.096 { 00:16:32.096 "code": -32602, 00:16:32.096 "message": "Invalid MN SPDK_Controller\u001f" 00:16:32.096 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:32.096 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:32.096 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:32.096 17:33:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' ... '126' '127') [full 96-entry array of printable-ASCII codes 32-127 elided] 00:16:32.096 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:32.096 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:32.096 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:32.096 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
[21 per-character loop iterations elided: each draws one code from chars[], renders it with printf %x / echo -e, and appends it via string+=, assembling the serial ')!egD}RPXH$_jp9oOroa' -- the character between '_' and 'j' is the unprintable 0x7f]
00:16:32.356 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ) == \- ]] 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ')!egD}RPXH$_jp9oOroa'
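
The iterations elided above (and the 41-character run further down) come from gen_random_s, which draws random entries from the chars array and appends the rendered characters one by one. A reconstruction from the trace, as a sketch; the real target/invalid.sh may differ in minor details:

    gen_random_s() {
        local length=$1 ll
        local chars=({32..127})        # decimal codes for ' ' (space) through 0x7f (DEL)
        local string
        for ((ll = 0; ll < length; ll++)); do
            # pick a random code, render it as \xHH, append the character
            string+=$(echo -e "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }
    gen_random_s 21    # e.g. the 21-character serial number probed here
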
17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ')!egD}RPXH$_jp9oOroa' nqn.2016-06.io.spdk:cnode3136 00:16:32.616 [2024-10-14 17:33:31.525241] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3136: invalid serial number ')!egD}RPXH$_jp9oOroa' 00:16:32.616 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:32.616 { 00:16:32.616 "nqn": "nqn.2016-06.io.spdk:cnode3136", 00:16:32.616 "serial_number": ")!egD}RPXH$_\u007fjp9oOroa", 00:16:32.616 "method": "nvmf_create_subsystem", 00:16:32.616 "req_id": 1 00:16:32.616 } 00:16:32.616 Got JSON-RPC error response 00:16:32.616 response: 00:16:32.616 { 00:16:32.616 "code": -32602, 00:16:32.616 "message": "Invalid SN )!egD}RPXH$_\u007fjp9oOroa" 00:16:32.616 }' 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: ... == *\I\n\v\a\l\i\d\ \S\N* ]] [the [[ ]] test repeats the request/response text above verbatim; elided]
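
That rejected-serial exchange is the template for every remaining probe in this log: the invalid model number next, then the cntlid-range probes and the bogus delete-target at the end. Each one issues an RPC that must fail, captures the request/response text, and pattern-matches the error message. A sketch of that pattern, assuming a hypothetical expect_error helper (not in the SPDK tree) around the rpc.py path from the trace; the cnode numbers are illustrative:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    expect_error() {               # usage: expect_error <substring> <rpc args...>
        local want=$1; shift
        local out
        if out=$("$rpc" "$@" 2>&1); then
            echo "unexpectedly succeeded: $*"; return 1
        fi
        [[ $out == *"$want"* ]] || { echo "wrong error for '$*': $out"; return 1; }
    }
    expect_error 'Invalid SN'           nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s "$(gen_random_s 21)"
    expect_error 'Invalid MN'           nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -d "$(gen_random_s 41)"
    expect_error 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -i 0
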
17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' ... '126' '127') [same 96-entry printable-ASCII array as above; elided] 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
[41 per-character loop iterations elided: same printf %x / echo -e / string+= pattern as above, assembling the model number '1Gk1~_N0iVO&+jeQI%~&838-{RsQx68PUeC~D@m7' -- the character between '-' and '{' is the unprintable 0x7f]
17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 1 == \- ]] 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '1Gk1~_N0iVO&+jeQI%~&838-{RsQx68PUeC~D@m7' 17:33:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '1Gk1~_N0iVO&+jeQI%~&838-{RsQx68PUeC~D@m7' nqn.2016-06.io.spdk:cnode8086 00:16:32.877 [2024-10-14 17:33:31.994836] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8086: invalid model number '1Gk1~_N0iVO&+jeQI%~&838-{RsQx68PUeC~D@m7' 00:16:33.136 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:33.136 { 00:16:33.136 "nqn": "nqn.2016-06.io.spdk:cnode8086", 00:16:33.136 "model_number": "1Gk1~_N0iVO&+jeQI%~&838-\u007f{RsQx68PUeC~D@m7",
00:16:33.136 "method": "nvmf_create_subsystem", 00:16:33.136 "req_id": 1 00:16:33.136 } 00:16:33.136 Got JSON-RPC error response 00:16:33.136 response: 00:16:33.136 { 00:16:33.136 "code": -32602, 00:16:33.136 "message": "Invalid MN 1Gk1~_N0iVO&+jeQI%~&838-\u007f{RsQx68PUeC~D@m7" 00:16:33.136 }' 00:16:33.136 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:33.136 { 00:16:33.136 "nqn": "nqn.2016-06.io.spdk:cnode8086", 00:16:33.136 "model_number": "1Gk1~_N0iVO&+jeQI%~&838-\u007f{RsQx68PUeC~D@m7", 00:16:33.136 "method": "nvmf_create_subsystem", 00:16:33.136 "req_id": 1 00:16:33.136 } 00:16:33.136 Got JSON-RPC error response 00:16:33.136 response: 00:16:33.136 { 00:16:33.136 "code": -32602, 00:16:33.136 "message": "Invalid MN 1Gk1~_N0iVO&+jeQI%~&838-\u007f{RsQx68PUeC~D@m7" 00:16:33.136 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:33.136 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:33.136 [2024-10-14 17:33:32.191540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.136 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:33.396 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:33.396 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:33.396 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:33.396 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:33.396 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:33.656 [2024-10-14 17:33:32.588868] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:33.656 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:33.656 { 00:16:33.656 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:33.656 "listen_address": { 00:16:33.656 "trtype": "tcp", 00:16:33.656 "traddr": "", 00:16:33.656 "trsvcid": "4421" 00:16:33.656 }, 00:16:33.656 "method": "nvmf_subsystem_remove_listener", 00:16:33.656 "req_id": 1 00:16:33.656 } 00:16:33.656 Got JSON-RPC error response 00:16:33.656 response: 00:16:33.656 { 00:16:33.656 "code": -32602, 00:16:33.656 "message": "Invalid parameters" 00:16:33.656 }' 00:16:33.656 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:33.656 { 00:16:33.656 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:33.656 "listen_address": { 00:16:33.656 "trtype": "tcp", 00:16:33.656 "traddr": "", 00:16:33.656 "trsvcid": "4421" 00:16:33.656 }, 00:16:33.656 "method": "nvmf_subsystem_remove_listener", 00:16:33.656 "req_id": 1 00:16:33.656 } 00:16:33.656 Got JSON-RPC error response 00:16:33.656 response: 00:16:33.656 { 00:16:33.656 "code": -32602, 00:16:33.656 "message": "Invalid parameters" 00:16:33.656 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:33.656 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode6920 -i 0
00:16:33.656 [2024-10-14 17:33:32.789473] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6920: invalid cntlid range [0-65519]
00:16:33.915 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:16:33.915 {
00:16:33.915 "nqn": "nqn.2016-06.io.spdk:cnode6920",
00:16:33.915 "min_cntlid": 0,
00:16:33.915 "method": "nvmf_create_subsystem",
00:16:33.915 "req_id": 1
00:16:33.915 }
00:16:33.915 Got JSON-RPC error response
00:16:33.915 response:
00:16:33.915 {
00:16:33.915 "code": -32602,
00:16:33.916 "message": "Invalid cntlid range [0-65519]"
00:16:33.916 }'
00:16:33.916 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request:
00:16:33.916 {
00:16:33.916 "nqn": "nqn.2016-06.io.spdk:cnode6920",
00:16:33.916 "min_cntlid": 0,
00:16:33.916 "method": "nvmf_create_subsystem",
00:16:33.916 "req_id": 1
00:16:33.916 }
00:16:33.916 Got JSON-RPC error response
00:16:33.916 response:
00:16:33.916 {
00:16:33.916 "code": -32602,
00:16:33.916 "message": "Invalid cntlid range [0-65519]"
00:16:33.916 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:16:33.916 17:33:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13346 -i 65520
00:16:33.916 [2024-10-14 17:33:32.998185] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13346: invalid cntlid range [65520-65519]
00:16:33.916 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request:
00:16:33.916 {
00:16:33.916 "nqn": "nqn.2016-06.io.spdk:cnode13346",
00:16:33.916 "min_cntlid": 65520,
00:16:33.916 "method": "nvmf_create_subsystem",
00:16:33.916 "req_id": 1
00:16:33.916 }
00:16:33.916 Got JSON-RPC error response
00:16:33.916 response:
00:16:33.916 {
00:16:33.916 "code": -32602,
00:16:33.916 "message": "Invalid cntlid range [65520-65519]"
00:16:33.916 }'
00:16:33.916 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request:
00:16:33.916 {
00:16:33.916 "nqn": "nqn.2016-06.io.spdk:cnode13346",
00:16:33.916 "min_cntlid": 65520,
00:16:33.916 "method": "nvmf_create_subsystem",
00:16:33.916 "req_id": 1
00:16:33.916 }
00:16:33.916 Got JSON-RPC error response
00:16:33.916 response:
00:16:33.916 {
00:16:33.916 "code": -32602,
00:16:33.916 "message": "Invalid cntlid range [65520-65519]"
00:16:33.916 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:16:33.916 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8952 -I 0
00:16:34.175 [2024-10-14 17:33:33.210904] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8952: invalid cntlid range [1-0]
00:16:34.175 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request:
00:16:34.175 {
00:16:34.175 "nqn": "nqn.2016-06.io.spdk:cnode8952",
00:16:34.175 "max_cntlid": 0,
00:16:34.175 "method": "nvmf_create_subsystem",
00:16:34.175 "req_id": 1
00:16:34.175 }
00:16:34.175 Got JSON-RPC error response
00:16:34.175 response:
00:16:34.175 {
00:16:34.175 "code": -32602,
00:16:34.175 "message": "Invalid cntlid range [1-0]"
00:16:34.175 }'
00:16:34.175 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request:
00:16:34.175 {
00:16:34.175 "nqn": "nqn.2016-06.io.spdk:cnode8952",
00:16:34.175 "max_cntlid": 0,
00:16:34.175 "method": "nvmf_create_subsystem",
00:16:34.175 "req_id": 1
00:16:34.175 }
00:16:34.175 Got JSON-RPC error response
00:16:34.175 response:
00:16:34.175 {
00:16:34.175 "code": -32602,
00:16:34.175 "message": "Invalid cntlid range [1-0]"
00:16:34.175 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:16:34.175 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode932 -I 65520
00:16:34.434 [2024-10-14 17:33:33.407523] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode932: invalid cntlid range [1-65520]
00:16:34.434 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request:
00:16:34.434 {
00:16:34.434 "nqn": "nqn.2016-06.io.spdk:cnode932",
00:16:34.434 "max_cntlid": 65520,
00:16:34.434 "method": "nvmf_create_subsystem",
00:16:34.434 "req_id": 1
00:16:34.434 }
00:16:34.434 Got JSON-RPC error response
00:16:34.434 response:
00:16:34.434 {
00:16:34.434 "code": -32602,
00:16:34.434 "message": "Invalid cntlid range [1-65520]"
00:16:34.434 }'
00:16:34.434 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request:
00:16:34.434 {
00:16:34.434 "nqn": "nqn.2016-06.io.spdk:cnode932",
00:16:34.434 "max_cntlid": 65520,
00:16:34.434 "method": "nvmf_create_subsystem",
00:16:34.434 "req_id": 1
00:16:34.434 }
00:16:34.434 Got JSON-RPC error response
00:16:34.434 response:
00:16:34.434 {
00:16:34.434 "code": -32602,
00:16:34.434 "message": "Invalid cntlid range [1-65520]"
00:16:34.434 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:16:34.434 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4756 -i 6 -I 5
00:16:34.693 [2024-10-14 17:33:33.604239] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4756: invalid cntlid range [6-5]
00:16:34.693 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request:
00:16:34.693 {
00:16:34.693 "nqn": "nqn.2016-06.io.spdk:cnode4756",
00:16:34.693 "min_cntlid": 6,
00:16:34.693 "max_cntlid": 5,
00:16:34.693 "method": "nvmf_create_subsystem",
00:16:34.693 "req_id": 1
00:16:34.693 }
00:16:34.693 Got JSON-RPC error response
00:16:34.693 response:
00:16:34.693 {
00:16:34.693 "code": -32602,
00:16:34.693 "message": "Invalid cntlid range [6-5]"
00:16:34.693 }'
00:16:34.693 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request:
00:16:34.693 {
00:16:34.693 "nqn": "nqn.2016-06.io.spdk:cnode4756",
00:16:34.693 "min_cntlid": 6,
00:16:34.693 "max_cntlid": 5,
00:16:34.693 "method": "nvmf_create_subsystem",
00:16:34.693 "req_id": 1
00:16:34.693 }
00:16:34.693 Got JSON-RPC error response
00:16:34.693 response:
00:16:34.693 {
00:16:34.693 "code": -32602,
00:16:34.693 "message": "Invalid cntlid range [6-5]"
00:16:34.693 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:16:34.693 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:16:34.693 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request:
00:16:34.693 {
00:16:34.693 "name": "foobar",
00:16:34.693 "method": "nvmf_delete_target",
00:16:34.693 "req_id": 1
00:16:34.693 }
00:16:34.693 Got JSON-RPC error response
00:16:34.693 response:
00:16:34.693 {
00:16:34.693 "code": -32602,
00:16:34.693 "message": "The specified target doesn'\''t exist, cannot delete it."
00:16:34.693 }'
00:16:34.693 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request:
00:16:34.693 {
00:16:34.693 "name": "foobar",
00:16:34.693 "method": "nvmf_delete_target",
00:16:34.693 "req_id": 1
00:16:34.693 }
00:16:34.693 Got JSON-RPC error response
00:16:34.693 response:
00:16:34.693 {
00:16:34.693 "code": -32602,
00:16:34.693 "message": "The specified target doesn't exist, cannot delete it."
00:16:34.693 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]]
00:16:34.693 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:16:34.693 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini
00:16:34.693 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup
00:16:34.693 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync
00:16:34.693 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:34.693 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e
00:16:34.693 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:34.693 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:16:34.693 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:34.694 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e
00:16:34.694 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0
00:16:34.694 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 1053779 ']'
00:16:34.694 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 1053779
00:16:34.694 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1053779 ']'
00:16:34.694 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1053779
00:16:34.694 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname
00:16:34.694 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:34.694 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1053779
00:16:34.953 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:34.953 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:34.953 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1053779'
killing process with pid 1053779
00:16:34.953 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1053779
00:16:34.953 17:33:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1053779
00:16:34.953 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:16:34.953 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:16:34.953 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:16:34.953 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr
00:16:34.953 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save
00:16:34.953 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:16:34.953 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore
00:16:34.953 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:16:34.953 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns
00:16:34.953 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:34.953 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:34.953 17:33:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:16:37.491
00:16:37.491 real 0m11.990s
00:16:37.491 user 0m18.451s
00:16:37.491 sys 0m5.360s
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:16:37.491 ************************************
00:16:37.491 END TEST nvmf_invalid
00:16:37.491 ************************************
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:37.491 ************************************
00:16:37.491 START TEST nvmf_connect_stress
00:16:37.491 ************************************
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:16:37.491 * Looking for test storage...
00:16:37.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-:
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-:
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<'
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:16:37.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:37.491 --rc genhtml_branch_coverage=1
00:16:37.491 --rc genhtml_function_coverage=1
00:16:37.491 --rc genhtml_legend=1
00:16:37.491 --rc geninfo_all_blocks=1
00:16:37.491 --rc geninfo_unexecuted_blocks=1
00:16:37.491
00:16:37.491 '
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:16:37.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:37.491 --rc genhtml_branch_coverage=1
00:16:37.491 --rc genhtml_function_coverage=1
00:16:37.491 --rc genhtml_legend=1
00:16:37.491 --rc geninfo_all_blocks=1
00:16:37.491 --rc geninfo_unexecuted_blocks=1
00:16:37.491
00:16:37.491 '
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:16:37.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:37.491 --rc genhtml_branch_coverage=1
00:16:37.491 --rc genhtml_function_coverage=1
00:16:37.491 --rc genhtml_legend=1
00:16:37.491 --rc geninfo_all_blocks=1
00:16:37.491 --rc geninfo_unexecuted_blocks=1
00:16:37.491
00:16:37.491 '
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:16:37.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:37.491 --rc genhtml_branch_coverage=1
00:16:37.491 --rc genhtml_function_coverage=1
00:16:37.491 --rc genhtml_legend=1
00:16:37.491 --rc geninfo_all_blocks=1
00:16:37.491 --rc geninfo_unexecuted_blocks=1
00:16:37.491
00:16:37.491 '
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:16:37.491 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable
00:16:37.492 17:33:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=()
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=()
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=()
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=()
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=()
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=()
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=()
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
Found 0000:86:00.0 (0x8086 - 0x159b)
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
Found 0000:86:00.1 (0x8086 - 0x159b)
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
Found net devices under 0000:86:00.0: cvl_0_0
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
Found net devices under 0000:86:00.1: cvl_0_1
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms
00:16:44.061
--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms
00:16:44.061
--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=1058040
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 1058040
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1058040 ']'
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:44.061 17:33:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:44.062 [2024-10-14 17:33:42.456672] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization...
00:16:44.062 [2024-10-14 17:33:42.456729] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:44.062 [2024-10-14 17:33:42.529721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:16:44.062 [2024-10-14 17:33:42.572490] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:44.062 [2024-10-14 17:33:42.572527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:44.062 [2024-10-14 17:33:42.572533] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:44.062 [2024-10-14 17:33:42.572540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:44.062 [2024-10-14 17:33:42.572546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:44.062 [2024-10-14 17:33:42.573968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:44.062 [2024-10-14 17:33:42.574071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:44.062 [2024-10-14 17:33:42.574072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:44.320 [2024-10-14 17:33:43.343971] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:44.320 [2024-10-14 17:33:43.364217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
NULL1
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1058289
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.320 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.609 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.609 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.609 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:16:44.609 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:16:44.609 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:44.609 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:44.609 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:44.609 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:44.866 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:44.866 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:44.866 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:44.866 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:44.866 17:33:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:45.124 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.124 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:45.124 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:45.124 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.124 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:45.382 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.382 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:45.382 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:45.382 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.382 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:45.640 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.640 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:45.640 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:45.640 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.640 17:33:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:46.205 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.205 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:46.205 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:46.205 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.205 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:46.464 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.464 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:46.464 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:46.464 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.464 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:46.722 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.722 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:46.722 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:46.722 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.722 17:33:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:46.981 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.981 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:46.981 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:46.981 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.981 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:47.547 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.547 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:47.547 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:47.547 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.547 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:47.805 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.805 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:47.805 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:47.806 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.806 17:33:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:48.064 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:48.064 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:48.064 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:48.064 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:48.064 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:48.322 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:48.322 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:48.322 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:48.322 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:48.322 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:48.582 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:48.582 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:48.582 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:48.582 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:48.582 17:33:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:49.148 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.148 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:49.148 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:49.148 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.148 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:49.406 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.406 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:49.406 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:49.406 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.406 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:49.665 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.665 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:49.665 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:49.665 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.665 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:49.923 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.923 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:49.923 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:49.923 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.923 17:33:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:50.183 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:50.183 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:50.183 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:50.183 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:50.183 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:50.750 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:50.750 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:50.750 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:50.750 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:50.750 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:51.008 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:51.008 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:51.008 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:51.008 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:51.008 17:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:51.266 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:51.266 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:51.266 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:51.266 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:51.266 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:51.524 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:51.524 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:51.525 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:51.525 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:51.525 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:52.091 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:52.091 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:52.091 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:52.091 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:52.091 17:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:52.350 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:52.350 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:52.350 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:52.350 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:52.350 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:52.609 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:52.609 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:52.609 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:52.609 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:52.609 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:52.867 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:52.867 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:52.867 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:52.867 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:52.867 17:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:53.126 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:53.126 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:53.126 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:53.126 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:53.126 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:53.693 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:53.693 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:53.693 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:53.693 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:53.693 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:53.952 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:53.952 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289
00:16:53.952 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:16:53.952 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:53.952 17:33:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:16:54.210 17:33:53
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289 00:16:54.210 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.210 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.210 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.469 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1058289 00:16:54.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1058289) - No such process 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1058289 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:54.469 rmmod nvme_tcp 00:16:54.469 rmmod nvme_fabrics 00:16:54.469 rmmod nvme_keyring 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 1058040 ']' 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 1058040 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1058040 ']' 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1058040 00:16:54.469 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:16:54.729 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:54.729 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1058040 00:16:54.729 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 
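The span above is a liveness poll from connect_stress.sh: while the background stress client (pid 1058289) runs, the script keeps hitting the target's JSON-RPC socket, and the loop ends exactly at the "No such process" failure. The bare rpc_cmd in the xtrace shows no arguments because bash xtrace omits redirections; the requests presumably come from the rpc.txt file the script deletes afterwards. A minimal sketch of the pattern, where only the script line numbers, the PID, and the helper names are taken from the trace (the loop structure and variable names are assumed):

    perf_pid=1058289                 # background stress client launched earlier in the test (assumed name)
    while kill -0 "$perf_pid"; do    # connect_stress.sh@34: exits when this fails ("No such process" above)
        rpc_cmd                      # connect_stress.sh@35: exercise the RPC server while under load
    done
    wait "$perf_pid"                 # connect_stress.sh@38: reap the finished client
    rm -f "$testdir/rpc.txt"         # connect_stress.sh@39: remove the RPC scratch file
    trap - SIGINT SIGTERM EXIT       # connect_stress.sh@41: clear the error trap
    nvmftestfini                     # connect_stress.sh@43: tear down target, kernel modules, netns config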
00:16:54.729 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:54.729 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1058040' 00:16:54.729 killing process with pid 1058040 00:16:54.729 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1058040 00:16:54.729 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1058040 00:16:54.729 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:54.729 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:54.729 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:54.729 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:16:54.729 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:54.729 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:16:54.729 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:16:54.729 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:54.729 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:54.729 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.729 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:54.729 17:33:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.265 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:57.265 00:16:57.265 real 0m19.703s 00:16:57.265 user 0m41.230s 00:16:57.265 sys 0m8.716s 00:16:57.265 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:57.265 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.265 ************************************ 00:16:57.265 END TEST nvmf_connect_stress 00:16:57.265 ************************************ 00:16:57.266 17:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:57.266 17:33:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:57.266 17:33:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:57.266 17:33:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:57.266 ************************************ 00:16:57.266 START TEST nvmf_fused_ordering 00:16:57.266 ************************************ 00:16:57.266 17:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:57.266 * Looking for test storage... 
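Every test in this log is framed by the run_test helper from autotest_common.sh, which produces the asterisk banners and the real/user/sys timing seen at the nvmf_connect_stress / nvmf_fused_ordering boundary above. A simplified reconstruction of its shape (the real helper does more bookkeeping; only the banner text, timing output, and invocation are taken from the log):

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # emits the real/user/sys summary when the test script finishes
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    # Invocation matching the trace:
    run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp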
00:16:57.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:57.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.266 --rc genhtml_branch_coverage=1 00:16:57.266 --rc genhtml_function_coverage=1 00:16:57.266 --rc genhtml_legend=1 00:16:57.266 --rc geninfo_all_blocks=1 00:16:57.266 --rc geninfo_unexecuted_blocks=1 00:16:57.266 00:16:57.266 ' 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:57.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.266 --rc genhtml_branch_coverage=1 00:16:57.266 --rc genhtml_function_coverage=1 00:16:57.266 --rc genhtml_legend=1 00:16:57.266 --rc geninfo_all_blocks=1 00:16:57.266 --rc geninfo_unexecuted_blocks=1 00:16:57.266 00:16:57.266 ' 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:57.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.266 --rc genhtml_branch_coverage=1 00:16:57.266 --rc genhtml_function_coverage=1 00:16:57.266 --rc genhtml_legend=1 00:16:57.266 --rc geninfo_all_blocks=1 00:16:57.266 --rc geninfo_unexecuted_blocks=1 00:16:57.266 00:16:57.266 ' 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:57.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.266 --rc genhtml_branch_coverage=1 00:16:57.266 --rc genhtml_function_coverage=1 00:16:57.266 --rc genhtml_legend=1 00:16:57.266 --rc geninfo_all_blocks=1 00:16:57.266 --rc geninfo_unexecuted_blocks=1 00:16:57.266 00:16:57.266 ' 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.266 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:57.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:16:57.267 17:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:03.838 17:34:01 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:03.838 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:03.838 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:03.838 Found net devices under 0000:86:00.0: cvl_0_0 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:03.838 Found net devices under 0000:86:00.1: cvl_0_1 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.838 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:03.839 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:03.839 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:03.839 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:03.839 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:03.839 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:03.839 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:03.839 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.839 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:03.839 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:03.839 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:03.839 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:03.839 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:03.839 17:34:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:03.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:17:03.839 00:17:03.839 --- 10.0.0.2 ping statistics --- 00:17:03.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.839 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:03.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:03.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:17:03.839 00:17:03.839 --- 10.0.0.1 ping statistics --- 00:17:03.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.839 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=1063443 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 1063443 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1063443 ']' 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:03.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:03.839 [2024-10-14 17:34:02.216764] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:17:03.839 [2024-10-14 17:34:02.216812] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.839 [2024-10-14 17:34:02.288464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.839 [2024-10-14 17:34:02.330091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.839 [2024-10-14 17:34:02.330125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.839 [2024-10-14 17:34:02.330132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.839 [2024-10-14 17:34:02.330138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.839 [2024-10-14 17:34:02.330142] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.839 [2024-10-14 17:34:02.330708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:03.839 [2024-10-14 17:34:02.465104] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:03.839 [2024-10-14 17:34:02.485300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:03.839 NULL1 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.839 17:34:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:03.839 [2024-10-14 17:34:02.541825] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
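The target setup just traced is pure JSON-RPC: rpc_cmd is a thin wrapper around scripts/rpc.py, so the same six calls from fused_ordering.sh can be issued directly. The flags below are copied verbatim from the xtrace; the comments are interpretation, not log output:

    # Target side, against the nvmf_tgt running in the cvl_0_0_ns_spdk namespace:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                            # fused_ordering.sh@15: TCP transport, options from NVMF_TRANSPORT_OPTS
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10    # @16: allow any host, serial number, max 10 namespaces
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # @17: the listener logged above
    scripts/rpc.py bdev_null_create NULL1 1000 512                                                    # @18: 1000 MB null bdev, 512 B blocks
    scripts/rpc.py bdev_wait_for_examine                                                              # @19
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1                             # @20: the 1GB namespace the tool reports below

    # Initiator side (fused_ordering.sh@22), run from the host over cvl_0_1:
    test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'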
00:17:03.839 [2024-10-14 17:34:02.541870] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1063473 ] Attached to nqn.2016-06.io.spdk:cnode1 Namespace ID: 1 size: 1GB fused_ordering(0) [ fused_ordering(1) through fused_ordering(419) elided: the tool logs one fused_ordering(N) line per fused submission, timestamps advancing from 00:17:03.840 to 00:17:04.360 ] fused_ordering(420) 00:17:04.360
fused_ordering(421) 00:17:04.360 fused_ordering(422) 00:17:04.360 fused_ordering(423) 00:17:04.360 fused_ordering(424) 00:17:04.360 fused_ordering(425) 00:17:04.360 fused_ordering(426) 00:17:04.360 fused_ordering(427) 00:17:04.360 fused_ordering(428) 00:17:04.360 fused_ordering(429) 00:17:04.360 fused_ordering(430) 00:17:04.360 fused_ordering(431) 00:17:04.360 fused_ordering(432) 00:17:04.360 fused_ordering(433) 00:17:04.360 fused_ordering(434) 00:17:04.360 fused_ordering(435) 00:17:04.360 fused_ordering(436) 00:17:04.360 fused_ordering(437) 00:17:04.360 fused_ordering(438) 00:17:04.360 fused_ordering(439) 00:17:04.360 fused_ordering(440) 00:17:04.360 fused_ordering(441) 00:17:04.360 fused_ordering(442) 00:17:04.360 fused_ordering(443) 00:17:04.360 fused_ordering(444) 00:17:04.360 fused_ordering(445) 00:17:04.360 fused_ordering(446) 00:17:04.360 fused_ordering(447) 00:17:04.360 fused_ordering(448) 00:17:04.360 fused_ordering(449) 00:17:04.360 fused_ordering(450) 00:17:04.360 fused_ordering(451) 00:17:04.360 fused_ordering(452) 00:17:04.360 fused_ordering(453) 00:17:04.360 fused_ordering(454) 00:17:04.360 fused_ordering(455) 00:17:04.360 fused_ordering(456) 00:17:04.360 fused_ordering(457) 00:17:04.360 fused_ordering(458) 00:17:04.360 fused_ordering(459) 00:17:04.360 fused_ordering(460) 00:17:04.360 fused_ordering(461) 00:17:04.360 fused_ordering(462) 00:17:04.360 fused_ordering(463) 00:17:04.360 fused_ordering(464) 00:17:04.360 fused_ordering(465) 00:17:04.360 fused_ordering(466) 00:17:04.360 fused_ordering(467) 00:17:04.360 fused_ordering(468) 00:17:04.360 fused_ordering(469) 00:17:04.360 fused_ordering(470) 00:17:04.360 fused_ordering(471) 00:17:04.360 fused_ordering(472) 00:17:04.360 fused_ordering(473) 00:17:04.360 fused_ordering(474) 00:17:04.360 fused_ordering(475) 00:17:04.360 fused_ordering(476) 00:17:04.360 fused_ordering(477) 00:17:04.360 fused_ordering(478) 00:17:04.360 fused_ordering(479) 00:17:04.360 fused_ordering(480) 00:17:04.360 fused_ordering(481) 00:17:04.360 fused_ordering(482) 00:17:04.360 fused_ordering(483) 00:17:04.360 fused_ordering(484) 00:17:04.360 fused_ordering(485) 00:17:04.360 fused_ordering(486) 00:17:04.360 fused_ordering(487) 00:17:04.360 fused_ordering(488) 00:17:04.360 fused_ordering(489) 00:17:04.360 fused_ordering(490) 00:17:04.360 fused_ordering(491) 00:17:04.360 fused_ordering(492) 00:17:04.360 fused_ordering(493) 00:17:04.361 fused_ordering(494) 00:17:04.361 fused_ordering(495) 00:17:04.361 fused_ordering(496) 00:17:04.361 fused_ordering(497) 00:17:04.361 fused_ordering(498) 00:17:04.361 fused_ordering(499) 00:17:04.361 fused_ordering(500) 00:17:04.361 fused_ordering(501) 00:17:04.361 fused_ordering(502) 00:17:04.361 fused_ordering(503) 00:17:04.361 fused_ordering(504) 00:17:04.361 fused_ordering(505) 00:17:04.361 fused_ordering(506) 00:17:04.361 fused_ordering(507) 00:17:04.361 fused_ordering(508) 00:17:04.361 fused_ordering(509) 00:17:04.361 fused_ordering(510) 00:17:04.361 fused_ordering(511) 00:17:04.361 fused_ordering(512) 00:17:04.361 fused_ordering(513) 00:17:04.361 fused_ordering(514) 00:17:04.361 fused_ordering(515) 00:17:04.361 fused_ordering(516) 00:17:04.361 fused_ordering(517) 00:17:04.361 fused_ordering(518) 00:17:04.361 fused_ordering(519) 00:17:04.361 fused_ordering(520) 00:17:04.361 fused_ordering(521) 00:17:04.361 fused_ordering(522) 00:17:04.361 fused_ordering(523) 00:17:04.361 fused_ordering(524) 00:17:04.361 fused_ordering(525) 00:17:04.361 fused_ordering(526) 00:17:04.361 fused_ordering(527) 00:17:04.361 fused_ordering(528) 
00:17:04.361 fused_ordering(529) 00:17:04.361 fused_ordering(530) 00:17:04.361 fused_ordering(531) 00:17:04.361 fused_ordering(532) 00:17:04.361 fused_ordering(533) 00:17:04.361 fused_ordering(534) 00:17:04.361 fused_ordering(535) 00:17:04.361 fused_ordering(536) 00:17:04.361 fused_ordering(537) 00:17:04.361 fused_ordering(538) 00:17:04.361 fused_ordering(539) 00:17:04.361 fused_ordering(540) 00:17:04.361 fused_ordering(541) 00:17:04.361 fused_ordering(542) 00:17:04.361 fused_ordering(543) 00:17:04.361 fused_ordering(544) 00:17:04.361 fused_ordering(545) 00:17:04.361 fused_ordering(546) 00:17:04.361 fused_ordering(547) 00:17:04.361 fused_ordering(548) 00:17:04.361 fused_ordering(549) 00:17:04.361 fused_ordering(550) 00:17:04.361 fused_ordering(551) 00:17:04.361 fused_ordering(552) 00:17:04.361 fused_ordering(553) 00:17:04.361 fused_ordering(554) 00:17:04.361 fused_ordering(555) 00:17:04.361 fused_ordering(556) 00:17:04.361 fused_ordering(557) 00:17:04.361 fused_ordering(558) 00:17:04.361 fused_ordering(559) 00:17:04.361 fused_ordering(560) 00:17:04.361 fused_ordering(561) 00:17:04.361 fused_ordering(562) 00:17:04.361 fused_ordering(563) 00:17:04.361 fused_ordering(564) 00:17:04.361 fused_ordering(565) 00:17:04.361 fused_ordering(566) 00:17:04.361 fused_ordering(567) 00:17:04.361 fused_ordering(568) 00:17:04.361 fused_ordering(569) 00:17:04.361 fused_ordering(570) 00:17:04.361 fused_ordering(571) 00:17:04.361 fused_ordering(572) 00:17:04.361 fused_ordering(573) 00:17:04.361 fused_ordering(574) 00:17:04.361 fused_ordering(575) 00:17:04.361 fused_ordering(576) 00:17:04.361 fused_ordering(577) 00:17:04.361 fused_ordering(578) 00:17:04.361 fused_ordering(579) 00:17:04.361 fused_ordering(580) 00:17:04.361 fused_ordering(581) 00:17:04.361 fused_ordering(582) 00:17:04.361 fused_ordering(583) 00:17:04.361 fused_ordering(584) 00:17:04.361 fused_ordering(585) 00:17:04.361 fused_ordering(586) 00:17:04.361 fused_ordering(587) 00:17:04.361 fused_ordering(588) 00:17:04.361 fused_ordering(589) 00:17:04.361 fused_ordering(590) 00:17:04.361 fused_ordering(591) 00:17:04.361 fused_ordering(592) 00:17:04.361 fused_ordering(593) 00:17:04.361 fused_ordering(594) 00:17:04.361 fused_ordering(595) 00:17:04.361 fused_ordering(596) 00:17:04.361 fused_ordering(597) 00:17:04.361 fused_ordering(598) 00:17:04.361 fused_ordering(599) 00:17:04.361 fused_ordering(600) 00:17:04.361 fused_ordering(601) 00:17:04.361 fused_ordering(602) 00:17:04.361 fused_ordering(603) 00:17:04.361 fused_ordering(604) 00:17:04.361 fused_ordering(605) 00:17:04.361 fused_ordering(606) 00:17:04.361 fused_ordering(607) 00:17:04.361 fused_ordering(608) 00:17:04.361 fused_ordering(609) 00:17:04.361 fused_ordering(610) 00:17:04.361 fused_ordering(611) 00:17:04.361 fused_ordering(612) 00:17:04.361 fused_ordering(613) 00:17:04.361 fused_ordering(614) 00:17:04.361 fused_ordering(615) 00:17:04.929 fused_ordering(616) 00:17:04.929 fused_ordering(617) 00:17:04.929 fused_ordering(618) 00:17:04.929 fused_ordering(619) 00:17:04.929 fused_ordering(620) 00:17:04.929 fused_ordering(621) 00:17:04.929 fused_ordering(622) 00:17:04.929 fused_ordering(623) 00:17:04.929 fused_ordering(624) 00:17:04.929 fused_ordering(625) 00:17:04.929 fused_ordering(626) 00:17:04.929 fused_ordering(627) 00:17:04.929 fused_ordering(628) 00:17:04.929 fused_ordering(629) 00:17:04.929 fused_ordering(630) 00:17:04.929 fused_ordering(631) 00:17:04.929 fused_ordering(632) 00:17:04.929 fused_ordering(633) 00:17:04.929 fused_ordering(634) 00:17:04.929 fused_ordering(635) 00:17:04.929 
fused_ordering(636) 00:17:04.929 fused_ordering(637) 00:17:04.929 fused_ordering(638) 00:17:04.929 fused_ordering(639) 00:17:04.929 fused_ordering(640) 00:17:04.929 fused_ordering(641) 00:17:04.929 fused_ordering(642) 00:17:04.929 fused_ordering(643) 00:17:04.929 fused_ordering(644) 00:17:04.929 fused_ordering(645) 00:17:04.929 fused_ordering(646) 00:17:04.929 fused_ordering(647) 00:17:04.929 fused_ordering(648) 00:17:04.929 fused_ordering(649) 00:17:04.929 fused_ordering(650) 00:17:04.929 fused_ordering(651) 00:17:04.929 fused_ordering(652) 00:17:04.929 fused_ordering(653) 00:17:04.929 fused_ordering(654) 00:17:04.929 fused_ordering(655) 00:17:04.929 fused_ordering(656) 00:17:04.929 fused_ordering(657) 00:17:04.929 fused_ordering(658) 00:17:04.929 fused_ordering(659) 00:17:04.929 fused_ordering(660) 00:17:04.929 fused_ordering(661) 00:17:04.929 fused_ordering(662) 00:17:04.929 fused_ordering(663) 00:17:04.929 fused_ordering(664) 00:17:04.929 fused_ordering(665) 00:17:04.929 fused_ordering(666) 00:17:04.929 fused_ordering(667) 00:17:04.929 fused_ordering(668) 00:17:04.929 fused_ordering(669) 00:17:04.929 fused_ordering(670) 00:17:04.929 fused_ordering(671) 00:17:04.929 fused_ordering(672) 00:17:04.929 fused_ordering(673) 00:17:04.929 fused_ordering(674) 00:17:04.929 fused_ordering(675) 00:17:04.929 fused_ordering(676) 00:17:04.929 fused_ordering(677) 00:17:04.929 fused_ordering(678) 00:17:04.929 fused_ordering(679) 00:17:04.929 fused_ordering(680) 00:17:04.929 fused_ordering(681) 00:17:04.929 fused_ordering(682) 00:17:04.929 fused_ordering(683) 00:17:04.929 fused_ordering(684) 00:17:04.929 fused_ordering(685) 00:17:04.929 fused_ordering(686) 00:17:04.929 fused_ordering(687) 00:17:04.929 fused_ordering(688) 00:17:04.929 fused_ordering(689) 00:17:04.929 fused_ordering(690) 00:17:04.929 fused_ordering(691) 00:17:04.929 fused_ordering(692) 00:17:04.929 fused_ordering(693) 00:17:04.929 fused_ordering(694) 00:17:04.929 fused_ordering(695) 00:17:04.929 fused_ordering(696) 00:17:04.929 fused_ordering(697) 00:17:04.929 fused_ordering(698) 00:17:04.929 fused_ordering(699) 00:17:04.929 fused_ordering(700) 00:17:04.929 fused_ordering(701) 00:17:04.929 fused_ordering(702) 00:17:04.929 fused_ordering(703) 00:17:04.929 fused_ordering(704) 00:17:04.929 fused_ordering(705) 00:17:04.929 fused_ordering(706) 00:17:04.929 fused_ordering(707) 00:17:04.929 fused_ordering(708) 00:17:04.929 fused_ordering(709) 00:17:04.929 fused_ordering(710) 00:17:04.929 fused_ordering(711) 00:17:04.929 fused_ordering(712) 00:17:04.929 fused_ordering(713) 00:17:04.929 fused_ordering(714) 00:17:04.929 fused_ordering(715) 00:17:04.929 fused_ordering(716) 00:17:04.929 fused_ordering(717) 00:17:04.929 fused_ordering(718) 00:17:04.929 fused_ordering(719) 00:17:04.929 fused_ordering(720) 00:17:04.929 fused_ordering(721) 00:17:04.929 fused_ordering(722) 00:17:04.929 fused_ordering(723) 00:17:04.929 fused_ordering(724) 00:17:04.929 fused_ordering(725) 00:17:04.929 fused_ordering(726) 00:17:04.929 fused_ordering(727) 00:17:04.929 fused_ordering(728) 00:17:04.929 fused_ordering(729) 00:17:04.929 fused_ordering(730) 00:17:04.929 fused_ordering(731) 00:17:04.929 fused_ordering(732) 00:17:04.929 fused_ordering(733) 00:17:04.929 fused_ordering(734) 00:17:04.929 fused_ordering(735) 00:17:04.929 fused_ordering(736) 00:17:04.929 fused_ordering(737) 00:17:04.929 fused_ordering(738) 00:17:04.929 fused_ordering(739) 00:17:04.929 fused_ordering(740) 00:17:04.929 fused_ordering(741) 00:17:04.929 fused_ordering(742) 00:17:04.929 fused_ordering(743) 
00:17:04.929 fused_ordering(744) 00:17:04.929 fused_ordering(745) 00:17:04.929 fused_ordering(746) 00:17:04.929 fused_ordering(747) 00:17:04.929 fused_ordering(748) 00:17:04.929 fused_ordering(749) 00:17:04.929 fused_ordering(750) 00:17:04.929 fused_ordering(751) 00:17:04.929 fused_ordering(752) 00:17:04.929 fused_ordering(753) 00:17:04.929 fused_ordering(754) 00:17:04.929 fused_ordering(755) 00:17:04.929 fused_ordering(756) 00:17:04.929 fused_ordering(757) 00:17:04.929 fused_ordering(758) 00:17:04.929 fused_ordering(759) 00:17:04.929 fused_ordering(760) 00:17:04.929 fused_ordering(761) 00:17:04.929 fused_ordering(762) 00:17:04.929 fused_ordering(763) 00:17:04.929 fused_ordering(764) 00:17:04.929 fused_ordering(765) 00:17:04.929 fused_ordering(766) 00:17:04.929 fused_ordering(767) 00:17:04.929 fused_ordering(768) 00:17:04.929 fused_ordering(769) 00:17:04.929 fused_ordering(770) 00:17:04.929 fused_ordering(771) 00:17:04.929 fused_ordering(772) 00:17:04.929 fused_ordering(773) 00:17:04.929 fused_ordering(774) 00:17:04.929 fused_ordering(775) 00:17:04.929 fused_ordering(776) 00:17:04.929 fused_ordering(777) 00:17:04.929 fused_ordering(778) 00:17:04.929 fused_ordering(779) 00:17:04.929 fused_ordering(780) 00:17:04.929 fused_ordering(781) 00:17:04.929 fused_ordering(782) 00:17:04.929 fused_ordering(783) 00:17:04.929 fused_ordering(784) 00:17:04.929 fused_ordering(785) 00:17:04.929 fused_ordering(786) 00:17:04.929 fused_ordering(787) 00:17:04.929 fused_ordering(788) 00:17:04.929 fused_ordering(789) 00:17:04.929 fused_ordering(790) 00:17:04.929 fused_ordering(791) 00:17:04.929 fused_ordering(792) 00:17:04.929 fused_ordering(793) 00:17:04.929 fused_ordering(794) 00:17:04.929 fused_ordering(795) 00:17:04.929 fused_ordering(796) 00:17:04.929 fused_ordering(797) 00:17:04.929 fused_ordering(798) 00:17:04.929 fused_ordering(799) 00:17:04.929 fused_ordering(800) 00:17:04.929 fused_ordering(801) 00:17:04.929 fused_ordering(802) 00:17:04.929 fused_ordering(803) 00:17:04.929 fused_ordering(804) 00:17:04.929 fused_ordering(805) 00:17:04.929 fused_ordering(806) 00:17:04.929 fused_ordering(807) 00:17:04.929 fused_ordering(808) 00:17:04.929 fused_ordering(809) 00:17:04.929 fused_ordering(810) 00:17:04.929 fused_ordering(811) 00:17:04.929 fused_ordering(812) 00:17:04.929 fused_ordering(813) 00:17:04.929 fused_ordering(814) 00:17:04.929 fused_ordering(815) 00:17:04.929 fused_ordering(816) 00:17:04.929 fused_ordering(817) 00:17:04.929 fused_ordering(818) 00:17:04.929 fused_ordering(819) 00:17:04.929 fused_ordering(820) 00:17:05.193 fused_ordering(821) 00:17:05.193 fused_ordering(822) 00:17:05.193 fused_ordering(823) 00:17:05.193 fused_ordering(824) 00:17:05.193 fused_ordering(825) 00:17:05.193 fused_ordering(826) 00:17:05.193 fused_ordering(827) 00:17:05.193 fused_ordering(828) 00:17:05.193 fused_ordering(829) 00:17:05.193 fused_ordering(830) 00:17:05.193 fused_ordering(831) 00:17:05.193 fused_ordering(832) 00:17:05.193 fused_ordering(833) 00:17:05.193 fused_ordering(834) 00:17:05.193 fused_ordering(835) 00:17:05.193 fused_ordering(836) 00:17:05.193 fused_ordering(837) 00:17:05.193 fused_ordering(838) 00:17:05.193 fused_ordering(839) 00:17:05.193 fused_ordering(840) 00:17:05.193 fused_ordering(841) 00:17:05.193 fused_ordering(842) 00:17:05.193 fused_ordering(843) 00:17:05.193 fused_ordering(844) 00:17:05.193 fused_ordering(845) 00:17:05.193 fused_ordering(846) 00:17:05.193 fused_ordering(847) 00:17:05.193 fused_ordering(848) 00:17:05.193 fused_ordering(849) 00:17:05.193 fused_ordering(850) 00:17:05.193 
fused_ordering(851) 00:17:05.193 fused_ordering(852) 00:17:05.193 fused_ordering(853) 00:17:05.193 fused_ordering(854) 00:17:05.193 fused_ordering(855) 00:17:05.193 fused_ordering(856) 00:17:05.193 fused_ordering(857) 00:17:05.193 fused_ordering(858) 00:17:05.193 fused_ordering(859) 00:17:05.193 fused_ordering(860) 00:17:05.193 fused_ordering(861) 00:17:05.193 fused_ordering(862) 00:17:05.193 fused_ordering(863) 00:17:05.193 fused_ordering(864) 00:17:05.193 fused_ordering(865) 00:17:05.193 fused_ordering(866) 00:17:05.193 fused_ordering(867) 00:17:05.193 fused_ordering(868) 00:17:05.193 fused_ordering(869) 00:17:05.193 fused_ordering(870) 00:17:05.193 fused_ordering(871) 00:17:05.193 fused_ordering(872) 00:17:05.193 fused_ordering(873) 00:17:05.193 fused_ordering(874) 00:17:05.193 fused_ordering(875) 00:17:05.193 fused_ordering(876) 00:17:05.193 fused_ordering(877) 00:17:05.193 fused_ordering(878) 00:17:05.193 fused_ordering(879) 00:17:05.193 fused_ordering(880) 00:17:05.193 fused_ordering(881) 00:17:05.193 fused_ordering(882) 00:17:05.193 fused_ordering(883) 00:17:05.193 fused_ordering(884) 00:17:05.193 fused_ordering(885) 00:17:05.193 fused_ordering(886) 00:17:05.193 fused_ordering(887) 00:17:05.193 fused_ordering(888) 00:17:05.193 fused_ordering(889) 00:17:05.193 fused_ordering(890) 00:17:05.193 fused_ordering(891) 00:17:05.193 fused_ordering(892) 00:17:05.193 fused_ordering(893) 00:17:05.193 fused_ordering(894) 00:17:05.193 fused_ordering(895) 00:17:05.193 fused_ordering(896) 00:17:05.193 fused_ordering(897) 00:17:05.193 fused_ordering(898) 00:17:05.193 fused_ordering(899) 00:17:05.193 fused_ordering(900) 00:17:05.193 fused_ordering(901) 00:17:05.193 fused_ordering(902) 00:17:05.193 fused_ordering(903) 00:17:05.193 fused_ordering(904) 00:17:05.193 fused_ordering(905) 00:17:05.193 fused_ordering(906) 00:17:05.193 fused_ordering(907) 00:17:05.193 fused_ordering(908) 00:17:05.193 fused_ordering(909) 00:17:05.193 fused_ordering(910) 00:17:05.193 fused_ordering(911) 00:17:05.193 fused_ordering(912) 00:17:05.193 fused_ordering(913) 00:17:05.193 fused_ordering(914) 00:17:05.193 fused_ordering(915) 00:17:05.193 fused_ordering(916) 00:17:05.193 fused_ordering(917) 00:17:05.193 fused_ordering(918) 00:17:05.193 fused_ordering(919) 00:17:05.193 fused_ordering(920) 00:17:05.193 fused_ordering(921) 00:17:05.193 fused_ordering(922) 00:17:05.193 fused_ordering(923) 00:17:05.193 fused_ordering(924) 00:17:05.193 fused_ordering(925) 00:17:05.193 fused_ordering(926) 00:17:05.193 fused_ordering(927) 00:17:05.193 fused_ordering(928) 00:17:05.193 fused_ordering(929) 00:17:05.193 fused_ordering(930) 00:17:05.193 fused_ordering(931) 00:17:05.193 fused_ordering(932) 00:17:05.193 fused_ordering(933) 00:17:05.193 fused_ordering(934) 00:17:05.193 fused_ordering(935) 00:17:05.193 fused_ordering(936) 00:17:05.193 fused_ordering(937) 00:17:05.193 fused_ordering(938) 00:17:05.193 fused_ordering(939) 00:17:05.193 fused_ordering(940) 00:17:05.193 fused_ordering(941) 00:17:05.193 fused_ordering(942) 00:17:05.193 fused_ordering(943) 00:17:05.193 fused_ordering(944) 00:17:05.193 fused_ordering(945) 00:17:05.193 fused_ordering(946) 00:17:05.193 fused_ordering(947) 00:17:05.193 fused_ordering(948) 00:17:05.193 fused_ordering(949) 00:17:05.193 fused_ordering(950) 00:17:05.193 fused_ordering(951) 00:17:05.193 fused_ordering(952) 00:17:05.193 fused_ordering(953) 00:17:05.193 fused_ordering(954) 00:17:05.193 fused_ordering(955) 00:17:05.193 fused_ordering(956) 00:17:05.193 fused_ordering(957) 00:17:05.193 fused_ordering(958) 
00:17:05.193 fused_ordering(959) 00:17:05.193 fused_ordering(960) 00:17:05.193 fused_ordering(961) 00:17:05.193 fused_ordering(962) 00:17:05.193 fused_ordering(963) 00:17:05.194 fused_ordering(964) 00:17:05.194 fused_ordering(965) 00:17:05.194 fused_ordering(966) 00:17:05.194 fused_ordering(967) 00:17:05.194 fused_ordering(968) 00:17:05.194 fused_ordering(969) 00:17:05.194 fused_ordering(970) 00:17:05.194 fused_ordering(971) 00:17:05.194 fused_ordering(972) 00:17:05.194 fused_ordering(973) 00:17:05.194 fused_ordering(974) 00:17:05.194 fused_ordering(975) 00:17:05.194 fused_ordering(976) 00:17:05.194 fused_ordering(977) 00:17:05.194 fused_ordering(978) 00:17:05.194 fused_ordering(979) 00:17:05.194 fused_ordering(980) 00:17:05.194 fused_ordering(981) 00:17:05.194 fused_ordering(982) 00:17:05.194 fused_ordering(983) 00:17:05.194 fused_ordering(984) 00:17:05.194 fused_ordering(985) 00:17:05.194 fused_ordering(986) 00:17:05.194 fused_ordering(987) 00:17:05.194 fused_ordering(988) 00:17:05.194 fused_ordering(989) 00:17:05.194 fused_ordering(990) 00:17:05.194 fused_ordering(991) 00:17:05.194 fused_ordering(992) 00:17:05.194 fused_ordering(993) 00:17:05.194 fused_ordering(994) 00:17:05.194 fused_ordering(995) 00:17:05.194 fused_ordering(996) 00:17:05.194 fused_ordering(997) 00:17:05.194 fused_ordering(998) 00:17:05.194 fused_ordering(999) 00:17:05.194 fused_ordering(1000) 00:17:05.194 fused_ordering(1001) 00:17:05.194 fused_ordering(1002) 00:17:05.194 fused_ordering(1003) 00:17:05.194 fused_ordering(1004) 00:17:05.194 fused_ordering(1005) 00:17:05.194 fused_ordering(1006) 00:17:05.194 fused_ordering(1007) 00:17:05.194 fused_ordering(1008) 00:17:05.194 fused_ordering(1009) 00:17:05.194 fused_ordering(1010) 00:17:05.194 fused_ordering(1011) 00:17:05.194 fused_ordering(1012) 00:17:05.194 fused_ordering(1013) 00:17:05.194 fused_ordering(1014) 00:17:05.194 fused_ordering(1015) 00:17:05.194 fused_ordering(1016) 00:17:05.194 fused_ordering(1017) 00:17:05.194 fused_ordering(1018) 00:17:05.194 fused_ordering(1019) 00:17:05.194 fused_ordering(1020) 00:17:05.194 fused_ordering(1021) 00:17:05.194 fused_ordering(1022) 00:17:05.194 fused_ordering(1023) 00:17:05.194 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:05.194 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:05.194 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:05.194 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:05.194 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:05.194 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:05.194 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:05.194 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:05.194 rmmod nvme_tcp 00:17:05.194 rmmod nvme_fabrics 00:17:05.194 rmmod nvme_keyring 00:17:05.194 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:05.194 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:05.194 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:05.194 17:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 1063443 ']' 00:17:05.194 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 1063443 00:17:05.194 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1063443 ']' 00:17:05.194 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1063443 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1063443 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1063443' 00:17:05.453 killing process with pid 1063443 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1063443 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1063443 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.453 17:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.474 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:07.474 00:17:07.474 real 0m10.648s 00:17:07.474 user 0m4.892s 00:17:07.474 sys 0m5.835s 00:17:07.474 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:07.474 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:07.474 ************************************ 00:17:07.474 END TEST nvmf_fused_ordering 00:17:07.474 
************************************ 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:07.734 ************************************ 00:17:07.734 START TEST nvmf_ns_masking 00:17:07.734 ************************************ 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:07.734 * Looking for test storage... 00:17:07.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:07.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.734 --rc genhtml_branch_coverage=1 00:17:07.734 --rc genhtml_function_coverage=1 00:17:07.734 --rc genhtml_legend=1 00:17:07.734 --rc geninfo_all_blocks=1 00:17:07.734 --rc geninfo_unexecuted_blocks=1 00:17:07.734 00:17:07.734 ' 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:07.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.734 --rc genhtml_branch_coverage=1 00:17:07.734 --rc genhtml_function_coverage=1 00:17:07.734 --rc genhtml_legend=1 00:17:07.734 --rc geninfo_all_blocks=1 00:17:07.734 --rc geninfo_unexecuted_blocks=1 00:17:07.734 00:17:07.734 ' 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:07.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.734 --rc genhtml_branch_coverage=1 00:17:07.734 --rc genhtml_function_coverage=1 00:17:07.734 --rc genhtml_legend=1 00:17:07.734 --rc geninfo_all_blocks=1 00:17:07.734 --rc geninfo_unexecuted_blocks=1 00:17:07.734 00:17:07.734 ' 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:07.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.734 --rc genhtml_branch_coverage=1 00:17:07.734 --rc genhtml_function_coverage=1 00:17:07.734 --rc genhtml_legend=1 00:17:07.734 --rc geninfo_all_blocks=1 00:17:07.734 --rc geninfo_unexecuted_blocks=1 00:17:07.734 00:17:07.734 ' 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.734 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.735 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.735 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.994 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:07.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=02c30e56-40e3-41de-ac88-7694e60b4d00 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2591e145-0c95-414b-bf37-2c3f9f43f4ca 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c7b1db90-050a-438c-9fed-2c718e19eb6d 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:07.995 17:34:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:14.567 17:34:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:14.567 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:14.567 17:34:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:14.567 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:14.567 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:14.568 Found net devices under 0000:86:00.0: cvl_0_0 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
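The trace above is the harness enumerating PCI NICs: both Intel e810 ports (vendor 0x8086, device 0x159b) are matched against the ID tables built earlier, and each port's kernel net device is resolved from sysfs. A minimal standalone sketch of that match, assuming only the standard sysfs layout (the harness's gather_supported_nvmf_pci_devs additionally caches the bus and carries the x722/mlx ID tables seen above):

    # find e810 ports (0x8086:0x159b) under sysfs and list their kernel netdevs
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        echo "Found ${pci##*/} ($(<"$pci/vendor") - $(<"$pci/device"))"
        ls "$pci/net" 2>/dev/null   # interface name(s), e.g. cvl_0_0 / cvl_0_1 in this run
    done
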
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:14.568 Found net devices under 0000:86:00.1: cvl_0_1 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:14.568 17:34:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:14.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:14.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms
00:17:14.568
00:17:14.568 --- 10.0.0.2 ping statistics ---
00:17:14.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:14.568 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:14.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:14.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms
00:17:14.568
00:17:14.568 --- 10.0.0.1 ping statistics ---
00:17:14.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:14.568 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=1067389
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 1067389
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1067389 ']'
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking --
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:14.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable
00:17:14.568 17:34:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:17:14.568 [2024-10-14 17:34:12.952266] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization...
00:17:14.568 [2024-10-14 17:34:12.952307] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:14.568 [2024-10-14 17:34:13.020951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:14.568 [2024-10-14 17:34:13.061917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:14.568 [2024-10-14 17:34:13.061947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:14.568 [2024-10-14 17:34:13.061956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:14.568 [2024-10-14 17:34:13.061962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:14.568 [2024-10-14 17:34:13.061970] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
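
Condensed, the nvmf_tcp_init sequence just traced does the following: the second E810 port (cvl_0_0) is moved into a private network namespace to serve as the target side, the first port (cvl_0_1) stays in the root namespace as the initiator, both get addresses on the 10.0.0.0/24 test subnet, the NVMe/TCP port is opened in iptables, and connectivity is proven with a ping in each direction before nvmf_tgt is started inside the namespace. A standalone replay of those commands as a sketch (run as root; names exactly as in the trace, nvmf_tgt path shortened):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                     # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> root namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF
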
00:17:14.568 [2024-10-14 17:34:13.062520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.568 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:14.568 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:14.568 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:14.568 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:14.568 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:14.568 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.568 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:14.568 [2024-10-14 17:34:13.366469] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.568 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:14.568 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:14.568 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:14.568 Malloc1 00:17:14.568 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:14.827 Malloc2 00:17:14.827 17:34:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:15.086 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:15.086 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:15.345 [2024-10-14 17:34:14.394411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.345 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:15.345 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c7b1db90-050a-438c-9fed-2c718e19eb6d -a 10.0.0.2 -s 4420 -i 4 00:17:15.604 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:15.604 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:15.604 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:15.604 17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:15.604 
17:34:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:17.506 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:17.506 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:17.506 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:17.506 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:17.506 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:17.506 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:17.506 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:17.506 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:17.506 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:17.506 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:17.506 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:17.506 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:17.506 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:17.506 [ 0]:0x1 00:17:17.506 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:17.506 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:17.766 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=64bfef1c25b84b54978a91495cf700e1 00:17:17.766 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 64bfef1c25b84b54978a91495cf700e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:17.766 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:17.766 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:17.766 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:17.766 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:17.766 [ 0]:0x1 00:17:17.766 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:17.766 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:18.025 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=64bfef1c25b84b54978a91495cf700e1 00:17:18.025 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 64bfef1c25b84b54978a91495cf700e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:18.025 17:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:18.025 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:18.025 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:18.025 [ 1]:0x2 00:17:18.025 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:18.025 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:18.025 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4750050a1a6649aca98de27d812bd623 00:17:18.025 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4750050a1a6649aca98de27d812bd623 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:18.025 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:18.025 17:34:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:18.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.025 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:18.284 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:18.543 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:18.543 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c7b1db90-050a-438c-9fed-2c718e19eb6d -a 10.0.0.2 -s 4420 -i 4 00:17:18.543 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:18.543 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:18.543 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:18.543 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:17:18.543 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:17:18.543 17:34:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:21.076 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:21.076 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:21.076 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:21.076 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:21.076 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:21.076 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:17:21.076 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:21.076 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:21.077 [ 0]:0x2 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=4750050a1a6649aca98de27d812bd623 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4750050a1a6649aca98de27d812bd623 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.077 17:34:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:21.077 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:21.077 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.077 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:21.077 [ 0]:0x1 00:17:21.077 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:21.077 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.336 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=64bfef1c25b84b54978a91495cf700e1 00:17:21.336 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 64bfef1c25b84b54978a91495cf700e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.336 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:21.336 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.336 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:21.336 [ 1]:0x2 00:17:21.336 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.336 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:21.336 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4750050a1a6649aca98de27d812bd623 00:17:21.336 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4750050a1a6649aca98de27d812bd623 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.336 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.596 17:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:21.596 [ 0]:0x2 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4750050a1a6649aca98de27d812bd623 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4750050a1a6649aca98de27d812bd623 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:21.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.596 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:21.856 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:21.856 17:34:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c7b1db90-050a-438c-9fed-2c718e19eb6d -a 10.0.0.2 -s 4420 -i 4 00:17:22.114 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:22.114 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:22.114 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:22.114 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:22.114 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:22.115 17:34:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:24.022 [ 0]:0x1 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=64bfef1c25b84b54978a91495cf700e1 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 64bfef1c25b84b54978a91495cf700e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:24.022 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:24.282 [ 1]:0x2 00:17:24.282 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:24.282 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:24.282 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4750050a1a6649aca98de27d812bd623 00:17:24.282 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4750050a1a6649aca98de27d812bd623 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:24.282 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:24.282 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:24.282 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:24.282 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:24.282 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:24.282 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.282 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:24.282 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.282 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:24.282 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:24.282 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:24.541 [ 0]:0x2 00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4750050a1a6649aca98de27d812bd623 00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4750050a1a6649aca98de27d812bd623 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:24.541 17:34:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:17:24.541 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:17:24.801 [2024-10-14 17:34:23.785282] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:17:24.801 request:
00:17:24.801 {
00:17:24.801 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:17:24.801 "nsid": 2,
00:17:24.801 "host": "nqn.2016-06.io.spdk:host1",
00:17:24.801 "method": "nvmf_ns_remove_host",
00:17:24.801 "req_id": 1
00:17:24.801 }
00:17:24.801 Got JSON-RPC error response
00:17:24.801 response:
00:17:24.801 {
00:17:24.801 "code": -32602,
00:17:24.801 "message": "Invalid parameters"
00:17:24.801 }
00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:17:24.801 17:34:23
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:24.801 [ 0]:0x2 00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:24.801 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:25.060 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4750050a1a6649aca98de27d812bd623 00:17:25.060 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4750050a1a6649aca98de27d812bd623 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:25.060 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:25.060 17:34:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:25.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.060 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1069346 00:17:25.060 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:25.060 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.060 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1069346 /var/tmp/host.sock 00:17:25.060 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1069346 ']' 00:17:25.060 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:25.060 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:25.060 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:25.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:25.060 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:25.060 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:25.060 [2024-10-14 17:34:24.160231] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:17:25.060 [2024-10-14 17:34:24.160280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1069346 ] 00:17:25.319 [2024-10-14 17:34:24.229239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.319 [2024-10-14 17:34:24.269591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.578 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:25.578 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:25.578 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:25.578 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:25.837 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 02c30e56-40e3-41de-ac88-7694e60b4d00 00:17:25.837 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:17:25.837 17:34:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 02C30E5640E341DEAC887694E60B4D00 -i 00:17:26.096 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2591e145-0c95-414b-bf37-2c3f9f43f4ca 00:17:26.096 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:17:26.096 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2591E1450C95414BBF372C3F9F43F4CA -i 00:17:26.355 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:26.355 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:26.616 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:26.617 17:34:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:27.185 nvme0n1 00:17:27.185 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:27.185 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:27.445 nvme1n2 00:17:27.445 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:27.445 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:27.445 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:27.445 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:27.445 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:27.703 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:27.703 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:27.703 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:27.703 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:27.703 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 02c30e56-40e3-41de-ac88-7694e60b4d00 == \0\2\c\3\0\e\5\6\-\4\0\e\3\-\4\1\d\e\-\a\c\8\8\-\7\6\9\4\e\6\0\b\4\d\0\0 ]] 00:17:27.703 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:27.703 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:27.704 17:34:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:27.962 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
2591e145-0c95-414b-bf37-2c3f9f43f4ca == \2\5\9\1\e\1\4\5\-\0\c\9\5\-\4\1\4\b\-\b\f\3\7\-\2\c\3\f\9\f\4\3\f\4\c\a ]] 00:17:27.962 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1069346 00:17:27.962 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1069346 ']' 00:17:27.962 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1069346 00:17:27.962 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:27.962 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:27.962 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1069346 00:17:28.221 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:28.221 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:28.221 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1069346' 00:17:28.221 killing process with pid 1069346 00:17:28.221 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1069346 00:17:28.221 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1069346 00:17:28.480 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.480 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:28.480 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:28.480 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:28.480 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:28.480 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:28.480 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:28.480 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:28.480 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:28.480 rmmod nvme_tcp 00:17:28.739 rmmod nvme_fabrics 00:17:28.739 rmmod nvme_keyring 00:17:28.739 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:28.739 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:28.739 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:28.739 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 1067389 ']' 00:17:28.739 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 1067389 00:17:28.739 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1067389 ']' 00:17:28.739 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1067389 00:17:28.739 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@955 -- # uname
00:17:28.739 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:28.739 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1067389
00:17:28.739 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:17:28.739 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:17:28.739 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1067389'
00:17:28.739 killing process with pid 1067389
00:17:28.739 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1067389
00:17:28.739 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1067389
00:17:28.997 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:17:28.997 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:17:28.997 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:17:28.997 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr
00:17:28.997 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save
00:17:28.997 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:17:28.997 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore
00:17:28.997 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:28.997 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:28.997 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:28.997 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:28.997 17:34:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:30.902 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:30.902
00:17:30.902 real 0m23.307s
00:17:30.902 user 0m24.795s
00:17:30.902 sys 0m6.797s
00:17:30.902 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:30.902 17:34:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:17:30.902 ************************************
00:17:30.902 END TEST nvmf_ns_masking ************************************
00:17:30.902 17:34:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]]
00:17:30.902 17:34:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:17:30.902 17:34:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:17:30.902 17:34:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:17:30.902 17:34:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
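
That closes out ns_masking: 23.3 s wall time, the target killed, the kernel NVMe modules unloaded, and the iptables rule and network namespace removed. What the suite verified: a namespace attached with --no-auto-visible stays invisible to a connected host (it drops out of nvme list-ns and its NGUID reads back as all zeros) until that host is granted access with nvmf_ns_add_host, and nvmf_ns_remove_host against an auto-visible namespace is rejected with JSON-RPC error -32602. A condensed, reordered sketch of the flow exercised above (rpc.py abbreviates the full scripts/rpc.py path used in the trace; the connect line drops the -I/-i flags the test passed):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420
  nvme list-ns /dev/nvme0                                # masked: nsid 1 is not reported
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # visible: real NGUID instead of all zeros
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # masked again
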
00:17:31.162 ************************************ 00:17:31.162 START TEST nvmf_nvme_cli 00:17:31.162 ************************************ 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:31.162 * Looking for test storage... 00:17:31.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:31.162 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:31.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.163 --rc genhtml_branch_coverage=1 00:17:31.163 --rc genhtml_function_coverage=1 00:17:31.163 --rc genhtml_legend=1 00:17:31.163 --rc geninfo_all_blocks=1 00:17:31.163 --rc geninfo_unexecuted_blocks=1 00:17:31.163 00:17:31.163 ' 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:31.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.163 --rc genhtml_branch_coverage=1 00:17:31.163 --rc genhtml_function_coverage=1 00:17:31.163 --rc genhtml_legend=1 00:17:31.163 --rc geninfo_all_blocks=1 00:17:31.163 --rc geninfo_unexecuted_blocks=1 00:17:31.163 00:17:31.163 ' 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:31.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.163 --rc genhtml_branch_coverage=1 00:17:31.163 --rc genhtml_function_coverage=1 00:17:31.163 --rc genhtml_legend=1 00:17:31.163 --rc geninfo_all_blocks=1 00:17:31.163 --rc geninfo_unexecuted_blocks=1 00:17:31.163 00:17:31.163 ' 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:31.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.163 --rc genhtml_branch_coverage=1 00:17:31.163 --rc genhtml_function_coverage=1 00:17:31.163 --rc genhtml_legend=1 00:17:31.163 --rc geninfo_all_blocks=1 00:17:31.163 --rc geninfo_unexecuted_blocks=1 00:17:31.163 00:17:31.163 ' 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
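The `lt 1.15 2` trace just above is scripts/common.sh gating the lcov coverage options on the installed lcov version; the core idea is a field-wise compare of dotted version strings. A minimal standalone sketch of that logic, mirroring the traced IFS=.-: split and per-field comparison (a sketch, not the script's exact code):

# Field-wise dotted-version "less than", as in the cmp_versions trace above (sketch)
lt() {
    local IFS='.-:' v ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first field where $1 wins: not less-than
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first field where $2 wins: less-than
    done
    return 1                                              # all fields equal: not strictly less-than
}
lt 1.15 2 && echo 'lcov < 2: enable --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'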
00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:31.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:31.163 17:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:31.163 17:34:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:37.737 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:37.737 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.737 
17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:37.737 Found net devices under 0000:86:00.0: cvl_0_0 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:37.737 Found net devices under 0000:86:00.1: cvl_0_1 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:37.737 17:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:37.737 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:37.737 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:37.737 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:37.737 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:37.737 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:37.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:37.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:17:37.738 00:17:37.738 --- 10.0.0.2 ping statistics --- 00:17:37.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.738 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:37.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:37.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:17:37.738 00:17:37.738 --- 10.0.0.1 ping statistics --- 00:17:37.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.738 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=1073496 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 1073496 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1073496 ']' 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.738 [2024-10-14 17:34:36.287275] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:17:37.738 [2024-10-14 17:34:36.287318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.738 [2024-10-14 17:34:36.360170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:37.738 [2024-10-14 17:34:36.403283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.738 [2024-10-14 17:34:36.403319] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.738 [2024-10-14 17:34:36.403326] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.738 [2024-10-14 17:34:36.403332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.738 [2024-10-14 17:34:36.403337] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.738 [2024-10-14 17:34:36.404870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.738 [2024-10-14 17:34:36.404983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.738 [2024-10-14 17:34:36.405089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.738 [2024-10-14 17:34:36.405090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.738 [2024-10-14 17:34:36.541004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.738 Malloc0 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.738 Malloc1 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.738 [2024-10-14 17:34:36.633726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:17:37.738 00:17:37.738 Discovery Log Number of Records 2, Generation counter 2 00:17:37.738 =====Discovery Log Entry 0====== 00:17:37.738 trtype: tcp 00:17:37.738 adrfam: ipv4 00:17:37.738 subtype: current discovery subsystem 00:17:37.738 treq: not required 00:17:37.738 portid: 0 00:17:37.738 trsvcid: 4420 00:17:37.738 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:17:37.738 traddr: 10.0.0.2 00:17:37.738 eflags: explicit discovery connections, duplicate discovery information 00:17:37.738 sectype: none 00:17:37.738 =====Discovery Log Entry 1====== 00:17:37.738 trtype: tcp 00:17:37.738 adrfam: ipv4 00:17:37.738 subtype: nvme subsystem 00:17:37.738 treq: not required 00:17:37.738 portid: 0 00:17:37.738 trsvcid: 4420 00:17:37.738 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:37.738 traddr: 10.0.0.2 00:17:37.738 eflags: none 00:17:37.738 sectype: none 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:17:37.738 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:17:37.739 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:37.739 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:17:37.739 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:37.739 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:37.739 17:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:39.117 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:39.117 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:17:39.117 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:39.117 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:39.117 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:39.117 17:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:17:41.023 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:41.023 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:41.023 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:41.023 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:41.023 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:41.023 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:17:41.023 17:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:41.023 17:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:41.023 /dev/nvme0n2 ]] 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:41.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:41.023 17:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:41.023 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:41.283 rmmod nvme_tcp 00:17:41.283 rmmod nvme_fabrics 00:17:41.283 rmmod nvme_keyring 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 1073496 ']' 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 1073496 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1073496 ']' 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1073496 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1073496 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1073496' 00:17:41.283 killing process with pid 1073496 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1073496 00:17:41.283 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1073496 00:17:41.543 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:41.543 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:41.543 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:41.543 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:41.543 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:17:41.543 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:41.543 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:17:41.543 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:41.543 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:41.543 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.543 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.543 17:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:44.078 00:17:44.078 real 0m12.534s 00:17:44.078 user 0m17.987s 00:17:44.078 sys 0m5.162s 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:44.078 ************************************ 00:17:44.078 END TEST nvmf_nvme_cli 00:17:44.078 ************************************ 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:44.078 ************************************ 00:17:44.078 START TEST nvmf_vfio_user 00:17:44.078 ************************************ 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:17:44.078 * Looking for test storage... 00:17:44.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:44.078 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:44.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.079 --rc genhtml_branch_coverage=1 00:17:44.079 --rc genhtml_function_coverage=1 00:17:44.079 --rc genhtml_legend=1 00:17:44.079 --rc geninfo_all_blocks=1 00:17:44.079 --rc geninfo_unexecuted_blocks=1 00:17:44.079 00:17:44.079 ' 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:44.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.079 --rc genhtml_branch_coverage=1 00:17:44.079 --rc genhtml_function_coverage=1 00:17:44.079 --rc genhtml_legend=1 00:17:44.079 --rc geninfo_all_blocks=1 00:17:44.079 --rc geninfo_unexecuted_blocks=1 00:17:44.079 00:17:44.079 ' 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:44.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.079 --rc genhtml_branch_coverage=1 00:17:44.079 --rc genhtml_function_coverage=1 00:17:44.079 --rc genhtml_legend=1 00:17:44.079 --rc geninfo_all_blocks=1 00:17:44.079 --rc geninfo_unexecuted_blocks=1 00:17:44.079 00:17:44.079 ' 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:44.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.079 --rc genhtml_branch_coverage=1 00:17:44.079 --rc genhtml_function_coverage=1 00:17:44.079 --rc genhtml_legend=1 00:17:44.079 --rc geninfo_all_blocks=1 00:17:44.079 --rc geninfo_unexecuted_blocks=1 00:17:44.079 00:17:44.079 ' 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:44.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
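A minimal sketch of the setup traced in the lines that follow, for readers reproducing it by hand. The binary path, core mask, and RPC are taken verbatim from the trace; SPDK_DIR and the sleep are shorthand assumptions, not part of the test script:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed shorthand
  rpc_py="$SPDK_DIR/scripts/rpc.py"
  # Start the NVMe-oF target on cores 0-3 with all tracepoint groups enabled,
  # as target/nvmf_vfio_user.sh@54 does below.
  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  sleep 1   # the script instead polls /var/tmp/spdk.sock via waitforlisten
  # Register the vfio-user transport before creating any subsystems.
  $rpc_py nvmf_create_transport -t VFIOUSER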
00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1074785 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1074785' 00:17:44.079 Process pid: 1074785 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1074785 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1074785 ']' 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.079 17:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:44.079 [2024-10-14 17:34:42.927559] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:17:44.079 [2024-10-14 17:34:42.927617] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.080 [2024-10-14 17:34:42.996221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:44.080 [2024-10-14 17:34:43.035742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.080 [2024-10-14 17:34:43.035779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:44.080 [2024-10-14 17:34:43.035787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.080 [2024-10-14 17:34:43.035793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.080 [2024-10-14 17:34:43.035799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.080 [2024-10-14 17:34:43.037367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.080 [2024-10-14 17:34:43.037474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.080 [2024-10-14 17:34:43.037583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.080 [2024-10-14 17:34:43.037583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.080 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:44.080 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:44.080 17:34:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:45.017 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:45.275 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:45.275 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:45.275 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:45.275 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:45.275 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:45.533 Malloc1 00:17:45.533 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:45.792 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:46.051 17:34:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:46.051 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:46.051 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:46.051 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:46.309 Malloc2 00:17:46.309 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
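The two controllers are provisioned with the same five RPCs, which setup_nvmf_vfio_user effectively runs in a loop for NUM_DEVICES=2; condensed here as a sketch (reusing rpc_py from the note above, all arguments verbatim from the trace):

  for i in 1 2; do
    dir=/var/run/vfio-user/domain/vfio-user$i/$i
    mkdir -p "$dir"
    # 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
    $rpc_py bdev_malloc_create 64 512 -b Malloc$i
    $rpc_py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc_py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    # For vfio-user the listener address is the socket directory; -s 0 is the service id.
    $rpc_py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
      -t VFIOUSER -a "$dir" -s 0
  done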
00:17:46.568 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:46.826 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:47.087 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:47.087 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:47.087 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:47.087 17:34:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:47.087 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:47.087 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:47.087 [2024-10-14 17:34:46.029159] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:17:47.087 [2024-10-14 17:34:46.029205] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1075270 ] 00:17:47.087 [2024-10-14 17:34:46.058896] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:47.087 [2024-10-14 17:34:46.069950] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:47.087 [2024-10-14 17:34:46.069968] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5c79fe3000 00:17:47.087 [2024-10-14 17:34:46.070953] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.087 [2024-10-14 17:34:46.071957] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.087 [2024-10-14 17:34:46.072954] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.087 [2024-10-14 17:34:46.073962] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:47.087 [2024-10-14 17:34:46.074969] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:47.087 [2024-10-14 17:34:46.075980] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.087 [2024-10-14 17:34:46.076985] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:17:47.087 [2024-10-14 17:34:46.077990] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.087 [2024-10-14 17:34:46.079003] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:47.087 [2024-10-14 17:34:46.079016] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5c79fd8000 00:17:47.087 [2024-10-14 17:34:46.079930] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:47.087 [2024-10-14 17:34:46.091363] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:47.087 [2024-10-14 17:34:46.091390] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:17:47.087 [2024-10-14 17:34:46.094096] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:47.087 [2024-10-14 17:34:46.094131] nvme_pcie_common.c: 149:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:47.087 [2024-10-14 17:34:46.094198] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:17:47.087 [2024-10-14 17:34:46.094215] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:17:47.087 [2024-10-14 17:34:46.094220] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:17:47.087 [2024-10-14 17:34:46.098607] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:47.087 [2024-10-14 17:34:46.098617] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:17:47.087 [2024-10-14 17:34:46.098623] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:17:47.087 [2024-10-14 17:34:46.099118] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:47.087 [2024-10-14 17:34:46.099125] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:17:47.087 [2024-10-14 17:34:46.099131] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:17:47.087 [2024-10-14 17:34:46.100122] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:47.087 [2024-10-14 17:34:46.100131] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:47.087 [2024-10-14 17:34:46.101124] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:47.087 [2024-10-14 
17:34:46.101131] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:17:47.087 [2024-10-14 17:34:46.101136] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:17:47.087 [2024-10-14 17:34:46.101142] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:47.087 [2024-10-14 17:34:46.101247] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:17:47.087 [2024-10-14 17:34:46.101251] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:47.087 [2024-10-14 17:34:46.101256] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:47.087 [2024-10-14 17:34:46.102133] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:47.087 [2024-10-14 17:34:46.103136] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:47.087 [2024-10-14 17:34:46.104150] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:47.087 [2024-10-14 17:34:46.105151] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:47.087 [2024-10-14 17:34:46.105224] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:47.088 [2024-10-14 17:34:46.106156] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:47.088 [2024-10-14 17:34:46.106164] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:47.088 [2024-10-14 17:34:46.106168] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106184] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:17:47.088 [2024-10-14 17:34:46.106195] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106207] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:47.088 [2024-10-14 17:34:46.106211] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.088 [2024-10-14 17:34:46.106215] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.088 [2024-10-14 17:34:46.106226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.088 [2024-10-14 17:34:46.106279] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:47.088 [2024-10-14 17:34:46.106288] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:17:47.088 [2024-10-14 17:34:46.106294] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:17:47.088 [2024-10-14 17:34:46.106298] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:17:47.088 [2024-10-14 17:34:46.106302] nvme_ctrlr.c:2115:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:47.088 [2024-10-14 17:34:46.106306] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:17:47.088 [2024-10-14 17:34:46.106310] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:17:47.088 [2024-10-14 17:34:46.106314] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106322] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:47.088 [2024-10-14 17:34:46.106342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:47.088 [2024-10-14 17:34:46.106351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.088 [2024-10-14 17:34:46.106359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.088 [2024-10-14 17:34:46.106366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.088 [2024-10-14 17:34:46.106373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.088 [2024-10-14 17:34:46.106377] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106386] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106394] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:47.088 [2024-10-14 17:34:46.106403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:47.088 [2024-10-14 17:34:46.106408] nvme_ctrlr.c:3065:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:17:47.088 [2024-10-14 17:34:46.106412] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106418] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106425] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:47.088 [2024-10-14 17:34:46.106441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:47.088 [2024-10-14 17:34:46.106490] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106497] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106505] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:47.088 [2024-10-14 17:34:46.106510] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:47.088 [2024-10-14 17:34:46.106513] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.088 [2024-10-14 17:34:46.106518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:47.088 [2024-10-14 17:34:46.106529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:47.088 [2024-10-14 17:34:46.106538] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:17:47.088 [2024-10-14 17:34:46.106545] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106551] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106557] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:47.088 [2024-10-14 17:34:46.106561] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.088 [2024-10-14 17:34:46.106564] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.088 [2024-10-14 17:34:46.106569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.088 [2024-10-14 17:34:46.106590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:47.088 [2024-10-14 17:34:46.106605] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106612] 
nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106618] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:47.088 [2024-10-14 17:34:46.106621] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.088 [2024-10-14 17:34:46.106624] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.088 [2024-10-14 17:34:46.106629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.088 [2024-10-14 17:34:46.106639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:47.088 [2024-10-14 17:34:46.106646] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106651] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106658] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106663] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106667] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106672] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106677] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:17:47.088 [2024-10-14 17:34:46.106681] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:17:47.088 [2024-10-14 17:34:46.106686] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:17:47.088 [2024-10-14 17:34:46.106702] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:47.088 [2024-10-14 17:34:46.106713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:47.088 [2024-10-14 17:34:46.106723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:47.088 [2024-10-14 17:34:46.106733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:47.088 [2024-10-14 17:34:46.106742] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:47.088 [2024-10-14 17:34:46.106754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:47.088 [2024-10-14 17:34:46.106763] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:47.088 [2024-10-14 17:34:46.106773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:47.088 [2024-10-14 17:34:46.106785] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:47.088 [2024-10-14 17:34:46.106789] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:47.088 [2024-10-14 17:34:46.106792] nvme_pcie_common.c:1265:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:47.088 [2024-10-14 17:34:46.106795] nvme_pcie_common.c:1281:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:47.088 [2024-10-14 17:34:46.106798] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:47.088 [2024-10-14 17:34:46.106803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:47.088 [2024-10-14 17:34:46.106810] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:47.088 [2024-10-14 17:34:46.106813] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:47.088 [2024-10-14 17:34:46.106816] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.088 [2024-10-14 17:34:46.106822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:47.089 [2024-10-14 17:34:46.106828] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:47.089 [2024-10-14 17:34:46.106832] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.089 [2024-10-14 17:34:46.106835] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.089 [2024-10-14 17:34:46.106840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.089 [2024-10-14 17:34:46.106847] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:47.089 [2024-10-14 17:34:46.106851] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:47.089 [2024-10-14 17:34:46.106854] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.089 [2024-10-14 17:34:46.106860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:47.089 [2024-10-14 17:34:46.106866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:47.089 [2024-10-14 17:34:46.106876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:47.089 [2024-10-14 17:34:46.106884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:47.089 [2024-10-14 17:34:46.106891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:47.089 ===================================================== 00:17:47.089 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:47.089 ===================================================== 00:17:47.089 Controller Capabilities/Features 00:17:47.089 ================================ 00:17:47.089 Vendor ID: 4e58 00:17:47.089 Subsystem Vendor ID: 4e58 00:17:47.089 Serial Number: SPDK1 00:17:47.089 Model Number: SPDK bdev Controller 00:17:47.089 Firmware Version: 25.01 00:17:47.089 Recommended Arb Burst: 6 00:17:47.089 IEEE OUI Identifier: 8d 6b 50 00:17:47.089 Multi-path I/O 00:17:47.089 May have multiple subsystem ports: Yes 00:17:47.089 May have multiple controllers: Yes 00:17:47.089 Associated with SR-IOV VF: No 00:17:47.089 Max Data Transfer Size: 131072 00:17:47.089 Max Number of Namespaces: 32 00:17:47.089 Max Number of I/O Queues: 127 00:17:47.089 NVMe Specification Version (VS): 1.3 00:17:47.089 NVMe Specification Version (Identify): 1.3 00:17:47.089 Maximum Queue Entries: 256 00:17:47.089 Contiguous Queues Required: Yes 00:17:47.089 Arbitration Mechanisms Supported 00:17:47.089 Weighted Round Robin: Not Supported 00:17:47.089 Vendor Specific: Not Supported 00:17:47.089 Reset Timeout: 15000 ms 00:17:47.089 Doorbell Stride: 4 bytes 00:17:47.089 NVM Subsystem Reset: Not Supported 00:17:47.089 Command Sets Supported 00:17:47.089 NVM Command Set: Supported 00:17:47.089 Boot Partition: Not Supported 00:17:47.089 Memory Page Size Minimum: 4096 bytes 00:17:47.089 Memory Page Size Maximum: 4096 bytes 00:17:47.089 Persistent Memory Region: Not Supported 00:17:47.089 Optional Asynchronous Events Supported 00:17:47.089 Namespace Attribute Notices: Supported 00:17:47.089 Firmware Activation Notices: Not Supported 00:17:47.089 ANA Change Notices: Not Supported 00:17:47.089 PLE Aggregate Log Change Notices: Not Supported 00:17:47.089 LBA Status Info Alert Notices: Not Supported 00:17:47.089 EGE Aggregate Log Change Notices: Not Supported 00:17:47.089 Normal NVM Subsystem Shutdown event: Not Supported 00:17:47.089 Zone Descriptor Change Notices: Not Supported 00:17:47.089 Discovery Log Change Notices: Not Supported 00:17:47.089 Controller Attributes 00:17:47.089 128-bit Host Identifier: Supported 00:17:47.089 Non-Operational Permissive Mode: Not Supported 00:17:47.089 NVM Sets: Not Supported 00:17:47.089 Read Recovery Levels: Not Supported 00:17:47.089 Endurance Groups: Not Supported 00:17:47.089 Predictable Latency Mode: Not Supported 00:17:47.089 Traffic Based Keep ALive: Not Supported 00:17:47.089 Namespace Granularity: Not Supported 00:17:47.089 SQ Associations: Not Supported 00:17:47.089 UUID List: Not Supported 00:17:47.089 Multi-Domain Subsystem: Not Supported 00:17:47.089 Fixed Capacity Management: Not Supported 00:17:47.089 Variable Capacity Management: Not Supported 00:17:47.089 Delete Endurance Group: Not Supported 00:17:47.089 Delete NVM Set: Not Supported 00:17:47.089 Extended LBA Formats Supported: Not Supported 00:17:47.089 Flexible Data Placement Supported: Not Supported 00:17:47.089 00:17:47.089 Controller Memory Buffer Support 00:17:47.089 ================================ 00:17:47.089 Supported: No 00:17:47.089 00:17:47.089 Persistent Memory Region Support 00:17:47.089 
================================ 00:17:47.089 Supported: No 00:17:47.089 00:17:47.089 Admin Command Set Attributes 00:17:47.089 ============================ 00:17:47.089 Security Send/Receive: Not Supported 00:17:47.089 Format NVM: Not Supported 00:17:47.089 Firmware Activate/Download: Not Supported 00:17:47.089 Namespace Management: Not Supported 00:17:47.089 Device Self-Test: Not Supported 00:17:47.089 Directives: Not Supported 00:17:47.089 NVMe-MI: Not Supported 00:17:47.089 Virtualization Management: Not Supported 00:17:47.089 Doorbell Buffer Config: Not Supported 00:17:47.089 Get LBA Status Capability: Not Supported 00:17:47.089 Command & Feature Lockdown Capability: Not Supported 00:17:47.089 Abort Command Limit: 4 00:17:47.089 Async Event Request Limit: 4 00:17:47.089 Number of Firmware Slots: N/A 00:17:47.089 Firmware Slot 1 Read-Only: N/A 00:17:47.089 Firmware Activation Without Reset: N/A 00:17:47.089 Multiple Update Detection Support: N/A 00:17:47.089 Firmware Update Granularity: No Information Provided 00:17:47.089 Per-Namespace SMART Log: No 00:17:47.089 Asymmetric Namespace Access Log Page: Not Supported 00:17:47.089 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:47.089 Command Effects Log Page: Supported 00:17:47.089 Get Log Page Extended Data: Supported 00:17:47.089 Telemetry Log Pages: Not Supported 00:17:47.089 Persistent Event Log Pages: Not Supported 00:17:47.089 Supported Log Pages Log Page: May Support 00:17:47.089 Commands Supported & Effects Log Page: Not Supported 00:17:47.089 Feature Identifiers & Effects Log Page:May Support 00:17:47.089 NVMe-MI Commands & Effects Log Page: May Support 00:17:47.089 Data Area 4 for Telemetry Log: Not Supported 00:17:47.089 Error Log Page Entries Supported: 128 00:17:47.089 Keep Alive: Supported 00:17:47.089 Keep Alive Granularity: 10000 ms 00:17:47.089 00:17:47.089 NVM Command Set Attributes 00:17:47.089 ========================== 00:17:47.089 Submission Queue Entry Size 00:17:47.089 Max: 64 00:17:47.089 Min: 64 00:17:47.089 Completion Queue Entry Size 00:17:47.089 Max: 16 00:17:47.089 Min: 16 00:17:47.089 Number of Namespaces: 32 00:17:47.089 Compare Command: Supported 00:17:47.089 Write Uncorrectable Command: Not Supported 00:17:47.089 Dataset Management Command: Supported 00:17:47.089 Write Zeroes Command: Supported 00:17:47.089 Set Features Save Field: Not Supported 00:17:47.089 Reservations: Not Supported 00:17:47.089 Timestamp: Not Supported 00:17:47.089 Copy: Supported 00:17:47.089 Volatile Write Cache: Present 00:17:47.089 Atomic Write Unit (Normal): 1 00:17:47.089 Atomic Write Unit (PFail): 1 00:17:47.089 Atomic Compare & Write Unit: 1 00:17:47.089 Fused Compare & Write: Supported 00:17:47.089 Scatter-Gather List 00:17:47.089 SGL Command Set: Supported (Dword aligned) 00:17:47.089 SGL Keyed: Not Supported 00:17:47.089 SGL Bit Bucket Descriptor: Not Supported 00:17:47.089 SGL Metadata Pointer: Not Supported 00:17:47.089 Oversized SGL: Not Supported 00:17:47.089 SGL Metadata Address: Not Supported 00:17:47.089 SGL Offset: Not Supported 00:17:47.089 Transport SGL Data Block: Not Supported 00:17:47.089 Replay Protected Memory Block: Not Supported 00:17:47.089 00:17:47.089 Firmware Slot Information 00:17:47.089 ========================= 00:17:47.089 Active slot: 1 00:17:47.089 Slot 1 Firmware Revision: 25.01 00:17:47.089 00:17:47.089 00:17:47.089 Commands Supported and Effects 00:17:47.089 ============================== 00:17:47.089 Admin Commands 00:17:47.089 -------------- 00:17:47.089 Get Log Page (02h): Supported 
00:17:47.089 Identify (06h): Supported 00:17:47.089 Abort (08h): Supported 00:17:47.089 Set Features (09h): Supported 00:17:47.089 Get Features (0Ah): Supported 00:17:47.089 Asynchronous Event Request (0Ch): Supported 00:17:47.089 Keep Alive (18h): Supported 00:17:47.089 I/O Commands 00:17:47.089 ------------ 00:17:47.089 Flush (00h): Supported LBA-Change 00:17:47.089 Write (01h): Supported LBA-Change 00:17:47.089 Read (02h): Supported 00:17:47.089 Compare (05h): Supported 00:17:47.089 Write Zeroes (08h): Supported LBA-Change 00:17:47.089 Dataset Management (09h): Supported LBA-Change 00:17:47.089 Copy (19h): Supported LBA-Change 00:17:47.089 00:17:47.089 Error Log 00:17:47.089 ========= 00:17:47.089 00:17:47.089 Arbitration 00:17:47.089 =========== 00:17:47.089 Arbitration Burst: 1 00:17:47.089 00:17:47.089 Power Management 00:17:47.089 ================ 00:17:47.089 Number of Power States: 1 00:17:47.089 Current Power State: Power State #0 00:17:47.089 Power State #0: 00:17:47.089 Max Power: 0.00 W 00:17:47.089 Non-Operational State: Operational 00:17:47.089 Entry Latency: Not Reported 00:17:47.090 Exit Latency: Not Reported 00:17:47.090 Relative Read Throughput: 0 00:17:47.090 Relative Read Latency: 0 00:17:47.090 Relative Write Throughput: 0 00:17:47.090 Relative Write Latency: 0 00:17:47.090 Idle Power: Not Reported 00:17:47.090 Active Power: Not Reported 00:17:47.090 Non-Operational Permissive Mode: Not Supported 00:17:47.090 00:17:47.090 Health Information 00:17:47.090 ================== 00:17:47.090 Critical Warnings: 00:17:47.090 Available Spare Space: OK 00:17:47.090 Temperature: OK 00:17:47.090 Device Reliability: OK 00:17:47.090 Read Only: No 00:17:47.090 Volatile Memory Backup: OK 00:17:47.090 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:47.090 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:47.090 Available Spare: 0% 00:17:47.090 Available Sp[2024-10-14 17:34:46.106970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:47.090 [2024-10-14 17:34:46.106979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:47.090 [2024-10-14 17:34:46.107002] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:17:47.090 [2024-10-14 17:34:46.107010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.090 [2024-10-14 17:34:46.107016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.090 [2024-10-14 17:34:46.107021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.090 [2024-10-14 17:34:46.107026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.090 [2024-10-14 17:34:46.107166] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:47.090 [2024-10-14 17:34:46.107176] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:47.090 [2024-10-14 17:34:46.108165] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:17:47.090 [2024-10-14 17:34:46.108212] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:17:47.090 [2024-10-14 17:34:46.108218] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:17:47.090 [2024-10-14 17:34:46.109174] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:47.090 [2024-10-14 17:34:46.109185] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:17:47.090 [2024-10-14 17:34:46.109235] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:47.090 [2024-10-14 17:34:46.110206] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:47.090 are Threshold: 0% 00:17:47.090 Life Percentage Used: 0% 00:17:47.090 Data Units Read: 0 00:17:47.090 Data Units Written: 0 00:17:47.090 Host Read Commands: 0 00:17:47.090 Host Write Commands: 0 00:17:47.090 Controller Busy Time: 0 minutes 00:17:47.090 Power Cycles: 0 00:17:47.090 Power On Hours: 0 hours 00:17:47.090 Unsafe Shutdowns: 0 00:17:47.090 Unrecoverable Media Errors: 0 00:17:47.090 Lifetime Error Log Entries: 0 00:17:47.090 Warning Temperature Time: 0 minutes 00:17:47.090 Critical Temperature Time: 0 minutes 00:17:47.090 00:17:47.090 Number of Queues 00:17:47.090 ================ 00:17:47.090 Number of I/O Submission Queues: 127 00:17:47.090 Number of I/O Completion Queues: 127 00:17:47.090 00:17:47.090 Active Namespaces 00:17:47.090 ================= 00:17:47.090 Namespace ID:1 00:17:47.090 Error Recovery Timeout: Unlimited 00:17:47.090 Command Set Identifier: NVM (00h) 00:17:47.090 Deallocate: Supported 00:17:47.090 Deallocated/Unwritten Error: Not Supported 00:17:47.090 Deallocated Read Value: Unknown 00:17:47.090 Deallocate in Write Zeroes: Not Supported 00:17:47.090 Deallocated Guard Field: 0xFFFF 00:17:47.090 Flush: Supported 00:17:47.090 Reservation: Supported 00:17:47.090 Namespace Sharing Capabilities: Multiple Controllers 00:17:47.090 Size (in LBAs): 131072 (0GiB) 00:17:47.090 Capacity (in LBAs): 131072 (0GiB) 00:17:47.090 Utilization (in LBAs): 131072 (0GiB) 00:17:47.090 NGUID: CF8600A8B9DD4432B068F1ABE39ECEED 00:17:47.090 UUID: cf8600a8-b9dd-4432-b068-f1abe39eceed 00:17:47.090 Thin Provisioning: Not Supported 00:17:47.090 Per-NS Atomic Units: Yes 00:17:47.090 Atomic Boundary Size (Normal): 0 00:17:47.090 Atomic Boundary Size (PFail): 0 00:17:47.090 Atomic Boundary Offset: 0 00:17:47.090 Maximum Single Source Range Length: 65535 00:17:47.090 Maximum Copy Length: 65535 00:17:47.090 Maximum Source Range Count: 1 00:17:47.090 NGUID/EUI64 Never Reused: No 00:17:47.090 Namespace Write Protected: No 00:17:47.090 Number of LBA Formats: 1 00:17:47.090 Current LBA Format: LBA Format #00 00:17:47.090 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:47.090 00:17:47.090 17:34:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:47.349 [2024-10-14 17:34:46.327407] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:52.622 Initializing NVMe Controllers 00:17:52.622 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:52.622 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:52.622 Initialization complete. Launching workers. 00:17:52.622 ======================================================== 00:17:52.622 Latency(us) 00:17:52.622 Device Information : IOPS MiB/s Average min max 00:17:52.622 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39907.46 155.89 3207.02 951.67 6736.31 00:17:52.622 ======================================================== 00:17:52.622 Total : 39907.46 155.89 3207.02 951.67 6736.31 00:17:52.622 00:17:52.622 [2024-10-14 17:34:51.344672] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:52.622 17:34:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:52.622 [2024-10-14 17:34:51.572778] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:57.894 Initializing NVMe Controllers 00:17:57.894 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:57.894 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:57.894 Initialization complete. Launching workers. 00:17:57.894 ======================================================== 00:17:57.894 Latency(us) 00:17:57.894 Device Information : IOPS MiB/s Average min max 00:17:57.894 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16045.55 62.68 7976.61 6399.93 8562.84 00:17:57.894 ======================================================== 00:17:57.894 Total : 16045.55 62.68 7976.61 6399.93 8562.84 00:17:57.894 00:17:57.894 [2024-10-14 17:34:56.605809] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:57.894 17:34:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:57.894 [2024-10-14 17:34:56.804753] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:03.167 [2024-10-14 17:35:01.905005] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:03.167 Initializing NVMe Controllers 00:18:03.167 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:03.167 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:03.167 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:03.167 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:03.167 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:03.167 Initialization complete. Launching workers. 
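A quick consistency check on the two spdk_nvme_perf tables above: with -q 128 outstanding I/Os on a single worker core, Little's law (W = L / lambda) predicts the reported average latency from the reported IOPS:

  read:  128 / 39907.46 IOPS ~ 3207 us   (table: 3207.02 us average)
  write: 128 / 16045.55 IOPS ~ 7977 us   (table: 7976.61 us average)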
00:18:03.167 Starting thread on core 2 00:18:03.167 Starting thread on core 3 00:18:03.167 Starting thread on core 1 00:18:03.167 17:35:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:03.167 [2024-10-14 17:35:02.187996] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:06.463 [2024-10-14 17:35:05.261494] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:06.463 Initializing NVMe Controllers 00:18:06.463 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:06.463 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:06.463 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:06.463 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:06.463 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:06.463 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:06.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:06.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:06.463 Initialization complete. Launching workers. 00:18:06.463 Starting thread on core 1 with urgent priority queue 00:18:06.463 Starting thread on core 2 with urgent priority queue 00:18:06.463 Starting thread on core 3 with urgent priority queue 00:18:06.463 Starting thread on core 0 with urgent priority queue 00:18:06.463 SPDK bdev Controller (SPDK1 ) core 0: 8190.67 IO/s 12.21 secs/100000 ios 00:18:06.463 SPDK bdev Controller (SPDK1 ) core 1: 8053.33 IO/s 12.42 secs/100000 ios 00:18:06.463 SPDK bdev Controller (SPDK1 ) core 2: 8370.00 IO/s 11.95 secs/100000 ios 00:18:06.463 SPDK bdev Controller (SPDK1 ) core 3: 8042.67 IO/s 12.43 secs/100000 ios 00:18:06.463 ======================================================== 00:18:06.463 00:18:06.463 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:06.463 [2024-10-14 17:35:05.532086] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:06.463 Initializing NVMe Controllers 00:18:06.463 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:06.463 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:06.463 Namespace ID: 1 size: 0GB 00:18:06.463 Initialization complete. 00:18:06.463 INFO: using host memory buffer for IO 00:18:06.463 Hello world! 
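Each host-side example in this phase (identify, perf, reconnect, arbitration, hello_world, and the overhead tool next) addresses the controller with the same -r transport ID string. A hypothetical helper, trid, just to make the pattern explicit; the script itself passes the string literally every time:

  # trid is an illustrative name, not a function from the test scripts.
  trid() {
    echo "trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user$1/$1 subnqn:nqn.2019-07.io.spdk:cnode$1"
  }
  "$SPDK_DIR/build/examples/hello_world" -d 256 -g -r "$(trid 1)"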
00:18:06.463 [2024-10-14 17:35:05.565309] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:06.745 17:35:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:06.745 [2024-10-14 17:35:05.829116] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:07.749 Initializing NVMe Controllers 00:18:07.750 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:07.750 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:07.750 Initialization complete. Launching workers. 00:18:07.750 submit (in ns) avg, min, max = 6456.2, 3129.5, 4000446.7 00:18:07.750 complete (in ns) avg, min, max = 19598.4, 1734.3, 3998797.1 00:18:07.750 00:18:07.750 Submit histogram 00:18:07.750 ================ 00:18:07.750 Range in us Cumulative Count 00:18:07.750 3.124 - 3.139: 0.0178% ( 3) 00:18:07.750 3.139 - 3.154: 0.0415% ( 4) 00:18:07.750 3.154 - 3.170: 0.0593% ( 3) 00:18:07.750 3.170 - 3.185: 0.1008% ( 7) 00:18:07.750 3.185 - 3.200: 0.2668% ( 28) 00:18:07.750 3.200 - 3.215: 1.3516% ( 183) 00:18:07.750 3.215 - 3.230: 4.9143% ( 601) 00:18:07.750 3.230 - 3.246: 10.0836% ( 872) 00:18:07.750 3.246 - 3.261: 15.6263% ( 935) 00:18:07.750 3.261 - 3.276: 22.1293% ( 1097) 00:18:07.750 3.276 - 3.291: 28.4486% ( 1066) 00:18:07.750 3.291 - 3.307: 34.1455% ( 961) 00:18:07.750 3.307 - 3.322: 40.4114% ( 1057) 00:18:07.750 3.322 - 3.337: 46.3987% ( 1010) 00:18:07.750 3.337 - 3.352: 51.9118% ( 930) 00:18:07.750 3.352 - 3.368: 57.4545% ( 935) 00:18:07.750 3.368 - 3.383: 65.5166% ( 1360) 00:18:07.750 3.383 - 3.398: 71.1423% ( 949) 00:18:07.750 3.398 - 3.413: 76.5546% ( 913) 00:18:07.750 3.413 - 3.429: 81.0896% ( 765) 00:18:07.750 3.429 - 3.444: 84.0892% ( 506) 00:18:07.750 3.444 - 3.459: 86.0988% ( 339) 00:18:07.750 3.459 - 3.474: 86.9346% ( 141) 00:18:07.750 3.474 - 3.490: 87.5748% ( 108) 00:18:07.750 3.490 - 3.505: 88.0432% ( 79) 00:18:07.750 3.505 - 3.520: 88.5470% ( 85) 00:18:07.750 3.520 - 3.535: 89.1458% ( 101) 00:18:07.750 3.535 - 3.550: 89.8571% ( 120) 00:18:07.750 3.550 - 3.566: 90.6159% ( 128) 00:18:07.750 3.566 - 3.581: 91.6711% ( 178) 00:18:07.750 3.581 - 3.596: 92.6730% ( 169) 00:18:07.750 3.596 - 3.611: 93.5977% ( 156) 00:18:07.750 3.611 - 3.627: 94.6055% ( 170) 00:18:07.750 3.627 - 3.642: 95.6370% ( 174) 00:18:07.750 3.642 - 3.657: 96.5143% ( 148) 00:18:07.750 3.657 - 3.672: 97.3561% ( 142) 00:18:07.750 3.672 - 3.688: 97.9667% ( 103) 00:18:07.750 3.688 - 3.703: 98.3402% ( 63) 00:18:07.750 3.703 - 3.718: 98.7136% ( 63) 00:18:07.750 3.718 - 3.733: 98.9567% ( 41) 00:18:07.750 3.733 - 3.749: 99.2590% ( 51) 00:18:07.750 3.749 - 3.764: 99.3835% ( 21) 00:18:07.750 3.764 - 3.779: 99.4665% ( 14) 00:18:07.750 3.779 - 3.794: 99.5495% ( 14) 00:18:07.750 3.794 - 3.810: 99.5850% ( 6) 00:18:07.750 3.810 - 3.825: 99.6028% ( 3) 00:18:07.750 3.825 - 3.840: 99.6206% ( 3) 00:18:07.750 3.886 - 3.901: 99.6265% ( 1) 00:18:07.750 3.992 - 4.023: 99.6325% ( 1) 00:18:07.750 5.029 - 5.059: 99.6384% ( 1) 00:18:07.750 5.120 - 5.150: 99.6443% ( 1) 00:18:07.750 5.333 - 5.364: 99.6502% ( 1) 00:18:07.750 5.425 - 5.455: 99.6562% ( 1) 00:18:07.750 5.486 - 5.516: 99.6621% ( 1) 00:18:07.750 5.547 - 5.577: 99.6740% ( 2) 00:18:07.750 5.577 - 5.608: 99.6799% ( 1) 00:18:07.750 5.608 - 5.638: 99.6858% ( 1) 00:18:07.750 
5.638 - 5.669: 99.6917% ( 1) 00:18:07.750 5.669 - 5.699: 99.6977% ( 1) 00:18:07.750 5.821 - 5.851: 99.7036% ( 1) 00:18:07.750 6.004 - 6.034: 99.7095% ( 1) 00:18:07.750 6.095 - 6.126: 99.7155% ( 1) 00:18:07.750 6.126 - 6.156: 99.7214% ( 1) 00:18:07.750 6.156 - 6.187: 99.7273% ( 1) 00:18:07.750 6.400 - 6.430: 99.7332% ( 1) 00:18:07.750 6.491 - 6.522: 99.7392% ( 1) 00:18:07.750 6.552 - 6.583: 99.7451% ( 1) 00:18:07.750 6.644 - 6.674: 99.7570% ( 2) 00:18:07.750 6.705 - 6.735: 99.7629% ( 1) 00:18:07.750 6.796 - 6.827: 99.7688% ( 1) 00:18:07.750 6.857 - 6.888: 99.7807% ( 2) 00:18:07.750 6.888 - 6.918: 99.7866% ( 1) 00:18:07.750 6.918 - 6.949: 99.7925% ( 1) 00:18:07.750 6.979 - 7.010: 99.7984% ( 1) 00:18:07.750 7.010 - 7.040: 99.8044% ( 1) 00:18:07.750 7.101 - 7.131: 99.8103% ( 1) 00:18:07.750 7.131 - 7.162: 99.8162% ( 1) 00:18:07.750 7.162 - 7.192: 99.8222% ( 1) 00:18:07.750 7.284 - 7.314: 99.8281% ( 1) 00:18:07.750 [2024-10-14 17:35:06.854538] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:08.009 7.375 - 7.406: 99.8340% ( 1) 00:18:08.009 7.436 - 7.467: 99.8399% ( 1) 00:18:08.009 7.558 - 7.589: 99.8459% ( 1) 00:18:08.009 7.619 - 7.650: 99.8518% ( 1) 00:18:08.009 7.680 - 7.710: 99.8577% ( 1) 00:18:08.009 7.741 - 7.771: 99.8637% ( 1) 00:18:08.009 7.771 - 7.802: 99.8755% ( 2) 00:18:08.009 7.985 - 8.046: 99.8814% ( 1) 00:18:08.009 8.290 - 8.350: 99.8874% ( 1) 00:18:08.009 8.411 - 8.472: 99.8933% ( 1) 00:18:08.009 13.044 - 13.105: 99.8992% ( 1) 00:18:08.009 15.360 - 15.421: 99.9052% ( 1) 00:18:08.009 15.726 - 15.848: 99.9111% ( 1) 00:18:08.009 18.895 - 19.017: 99.9170% ( 1) 00:18:08.009 19.017 - 19.139: 99.9229% ( 1) 00:18:08.009 3994.575 - 4025.783: 100.0000% ( 13) 00:18:08.009 00:18:08.009 Complete histogram 00:18:08.009 ================== 00:18:08.009 Range in us Cumulative Count 00:18:08.009 1.730 - 1.737: 0.0059% ( 1) 00:18:08.009 1.737 - 1.745: 0.0178% ( 2) 00:18:08.009 1.745 - 1.752: 0.0237% ( 1) 00:18:08.009 1.752 - 1.760: 0.0296% ( 1) 00:18:08.009 1.760 - 1.768: 0.1423% ( 19) 00:18:08.009 1.768 - 1.775: 0.6995% ( 94) 00:18:08.009 1.775 - 1.783: 1.6361% ( 158) 00:18:08.009 1.783 - 1.790: 2.5313% ( 151) 00:18:08.009 1.790 - 1.798: 3.0707% ( 91) 00:18:08.009 1.798 - 1.806: 3.5272% ( 77) 00:18:08.009 1.806 - 1.813: 4.3334% ( 136) 00:18:08.009 1.813 - 1.821: 12.6208% ( 1398) 00:18:08.009 1.821 - 1.829: 40.9034% ( 4771) 00:18:08.009 1.829 - 1.836: 71.2787% ( 5124) 00:18:08.009 1.836 - 1.844: 85.4882% ( 2397) 00:18:08.009 1.844 - 1.851: 90.1832% ( 792) 00:18:08.009 1.851 - 1.859: 93.3250% ( 530) 00:18:08.009 1.859 - 1.867: 95.5303% ( 372) 00:18:08.009 1.867 - 1.874: 96.3720% ( 142) 00:18:08.009 1.874 - 1.882: 96.7811% ( 69) 00:18:08.009 1.882 - 1.890: 97.0893% ( 52) 00:18:08.009 1.890 - 1.897: 97.4628% ( 63) 00:18:08.009 1.897 - 1.905: 97.9845% ( 88) 00:18:08.009 1.905 - 1.912: 98.5358% ( 93) 00:18:08.009 1.912 - 1.920: 98.8855% ( 59) 00:18:08.009 1.920 - 1.928: 99.0930% ( 35) 00:18:08.009 1.928 - 1.935: 99.1819% ( 15) 00:18:08.009 1.935 - 1.943: 99.2116% ( 5) 00:18:08.009 1.943 - 1.950: 99.2649% ( 9) 00:18:08.009 1.950 - 1.966: 99.2946% ( 5) 00:18:08.009 1.996 - 2.011: 99.3183% ( 4) 00:18:08.009 2.042 - 2.057: 99.3242% ( 1) 00:18:08.009 2.088 - 2.103: 99.3361% ( 2) 00:18:08.009 2.118 - 2.133: 99.3420% ( 1) 00:18:08.009 2.133 - 2.149: 99.3479% ( 1) 00:18:08.009 2.149 - 2.164: 99.3538% ( 1) 00:18:08.009 2.210 - 2.225: 99.3598% ( 1) 00:18:08.010 2.270 - 2.286: 99.3657% ( 1) 00:18:08.010 2.316 - 2.331: 99.3716% ( 1) 00:18:08.010 3.581 - 
3.596: 99.3776% ( 1) 00:18:08.010 3.992 - 4.023: 99.3835% ( 1) 00:18:08.010 4.053 - 4.084: 99.3894% ( 1) 00:18:08.010 4.145 - 4.175: 99.3953% ( 1) 00:18:08.010 4.175 - 4.206: 99.4013% ( 1) 00:18:08.010 4.236 - 4.267: 99.4072% ( 1) 00:18:08.010 4.267 - 4.297: 99.4131% ( 1) 00:18:08.010 4.297 - 4.328: 99.4191% ( 1) 00:18:08.010 4.358 - 4.389: 99.4250% ( 1) 00:18:08.010 4.419 - 4.450: 99.4368% ( 2) 00:18:08.010 4.450 - 4.480: 99.4428% ( 1) 00:18:08.010 4.602 - 4.632: 99.4487% ( 1) 00:18:08.010 4.785 - 4.815: 99.4546% ( 1) 00:18:08.010 4.815 - 4.846: 99.4605% ( 1) 00:18:08.010 5.090 - 5.120: 99.4665% ( 1) 00:18:08.010 5.181 - 5.211: 99.4724% ( 1) 00:18:08.010 5.364 - 5.394: 99.4783% ( 1) 00:18:08.010 5.455 - 5.486: 99.4843% ( 1) 00:18:08.010 5.547 - 5.577: 99.4902% ( 1) 00:18:08.010 5.669 - 5.699: 99.5020% ( 2) 00:18:08.010 5.943 - 5.973: 99.5080% ( 1) 00:18:08.010 6.461 - 6.491: 99.5198% ( 2) 00:18:08.010 6.552 - 6.583: 99.5258% ( 1) 00:18:08.010 6.583 - 6.613: 99.5317% ( 1) 00:18:08.010 6.644 - 6.674: 99.5376% ( 1) 00:18:08.010 6.766 - 6.796: 99.5435% ( 1) 00:18:08.010 8.350 - 8.411: 99.5495% ( 1) 00:18:08.010 10.301 - 10.362: 99.5554% ( 1) 00:18:08.010 3978.971 - 3994.575: 99.5673% ( 2) 00:18:08.010 3994.575 - 4025.783: 100.0000% ( 73) 00:18:08.010 00:18:08.010 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:08.010 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:08.010 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:08.010 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:08.010 17:35:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:08.010 [ 00:18:08.010 { 00:18:08.010 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:08.010 "subtype": "Discovery", 00:18:08.010 "listen_addresses": [], 00:18:08.010 "allow_any_host": true, 00:18:08.010 "hosts": [] 00:18:08.010 }, 00:18:08.010 { 00:18:08.010 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:08.010 "subtype": "NVMe", 00:18:08.010 "listen_addresses": [ 00:18:08.010 { 00:18:08.010 "trtype": "VFIOUSER", 00:18:08.010 "adrfam": "IPv4", 00:18:08.010 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:08.010 "trsvcid": "0" 00:18:08.010 } 00:18:08.010 ], 00:18:08.010 "allow_any_host": true, 00:18:08.010 "hosts": [], 00:18:08.010 "serial_number": "SPDK1", 00:18:08.010 "model_number": "SPDK bdev Controller", 00:18:08.010 "max_namespaces": 32, 00:18:08.010 "min_cntlid": 1, 00:18:08.010 "max_cntlid": 65519, 00:18:08.010 "namespaces": [ 00:18:08.010 { 00:18:08.010 "nsid": 1, 00:18:08.010 "bdev_name": "Malloc1", 00:18:08.010 "name": "Malloc1", 00:18:08.010 "nguid": "CF8600A8B9DD4432B068F1ABE39ECEED", 00:18:08.010 "uuid": "cf8600a8-b9dd-4432-b068-f1abe39eceed" 00:18:08.010 } 00:18:08.010 ] 00:18:08.010 }, 00:18:08.010 { 00:18:08.010 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:08.010 "subtype": "NVMe", 00:18:08.010 "listen_addresses": [ 00:18:08.010 { 00:18:08.010 "trtype": "VFIOUSER", 00:18:08.010 "adrfam": "IPv4", 00:18:08.010 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:08.010 "trsvcid": "0" 00:18:08.010 } 00:18:08.010 ], 00:18:08.010 "allow_any_host": 
true, 00:18:08.010 "hosts": [], 00:18:08.010 "serial_number": "SPDK2", 00:18:08.010 "model_number": "SPDK bdev Controller", 00:18:08.010 "max_namespaces": 32, 00:18:08.010 "min_cntlid": 1, 00:18:08.010 "max_cntlid": 65519, 00:18:08.010 "namespaces": [ 00:18:08.010 { 00:18:08.010 "nsid": 1, 00:18:08.010 "bdev_name": "Malloc2", 00:18:08.010 "name": "Malloc2", 00:18:08.010 "nguid": "955921D198034A13A2A77CAFB1E3B368", 00:18:08.010 "uuid": "955921d1-9803-4a13-a2a7-7cafb1e3b368" 00:18:08.010 } 00:18:08.010 ] 00:18:08.010 } 00:18:08.010 ] 00:18:08.010 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:08.010 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:08.010 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1078721 00:18:08.010 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:08.010 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:08.010 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:08.010 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:08.010 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:08.010 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:08.010 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:08.269 [2024-10-14 17:35:07.237945] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:08.269 Malloc3 00:18:08.269 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:08.528 [2024-10-14 17:35:07.480792] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:08.528 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:08.528 Asynchronous Event Request test 00:18:08.528 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:08.528 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:08.528 Registering asynchronous event callbacks... 00:18:08.528 Starting namespace attribute notice tests for all controllers... 00:18:08.528 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:08.528 aer_cb - Changed Namespace 00:18:08.528 Cleaning up... 
00:18:08.528 [ 00:18:08.528 { 00:18:08.528 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:08.528 "subtype": "Discovery", 00:18:08.528 "listen_addresses": [], 00:18:08.528 "allow_any_host": true, 00:18:08.528 "hosts": [] 00:18:08.528 }, 00:18:08.528 { 00:18:08.528 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:08.528 "subtype": "NVMe", 00:18:08.528 "listen_addresses": [ 00:18:08.528 { 00:18:08.528 "trtype": "VFIOUSER", 00:18:08.528 "adrfam": "IPv4", 00:18:08.528 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:08.528 "trsvcid": "0" 00:18:08.528 } 00:18:08.528 ], 00:18:08.528 "allow_any_host": true, 00:18:08.528 "hosts": [], 00:18:08.528 "serial_number": "SPDK1", 00:18:08.528 "model_number": "SPDK bdev Controller", 00:18:08.528 "max_namespaces": 32, 00:18:08.528 "min_cntlid": 1, 00:18:08.528 "max_cntlid": 65519, 00:18:08.528 "namespaces": [ 00:18:08.528 { 00:18:08.528 "nsid": 1, 00:18:08.528 "bdev_name": "Malloc1", 00:18:08.528 "name": "Malloc1", 00:18:08.528 "nguid": "CF8600A8B9DD4432B068F1ABE39ECEED", 00:18:08.528 "uuid": "cf8600a8-b9dd-4432-b068-f1abe39eceed" 00:18:08.528 }, 00:18:08.528 { 00:18:08.528 "nsid": 2, 00:18:08.528 "bdev_name": "Malloc3", 00:18:08.528 "name": "Malloc3", 00:18:08.528 "nguid": "4B7E4A66E8D448F29DAC4EFE7F1854A8", 00:18:08.528 "uuid": "4b7e4a66-e8d4-48f2-9dac-4efe7f1854a8" 00:18:08.528 } 00:18:08.528 ] 00:18:08.528 }, 00:18:08.528 { 00:18:08.528 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:08.528 "subtype": "NVMe", 00:18:08.528 "listen_addresses": [ 00:18:08.528 { 00:18:08.528 "trtype": "VFIOUSER", 00:18:08.528 "adrfam": "IPv4", 00:18:08.528 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:08.528 "trsvcid": "0" 00:18:08.528 } 00:18:08.528 ], 00:18:08.528 "allow_any_host": true, 00:18:08.528 "hosts": [], 00:18:08.528 "serial_number": "SPDK2", 00:18:08.528 "model_number": "SPDK bdev Controller", 00:18:08.528 "max_namespaces": 32, 00:18:08.528 "min_cntlid": 1, 00:18:08.528 "max_cntlid": 65519, 00:18:08.528 "namespaces": [ 00:18:08.528 { 00:18:08.528 "nsid": 1, 00:18:08.528 "bdev_name": "Malloc2", 00:18:08.528 "name": "Malloc2", 00:18:08.528 "nguid": "955921D198034A13A2A77CAFB1E3B368", 00:18:08.528 "uuid": "955921d1-9803-4a13-a2a7-7cafb1e3b368" 00:18:08.528 } 00:18:08.528 ] 00:18:08.528 } 00:18:08.528 ] 00:18:08.789 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1078721 00:18:08.789 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:08.789 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:08.789 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:08.789 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:08.789 [2024-10-14 17:35:07.712639] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:18:08.789 [2024-10-14 17:35:07.712674] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1078755 ] 00:18:08.789 [2024-10-14 17:35:07.739804] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:08.789 [2024-10-14 17:35:07.744043] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:08.789 [2024-10-14 17:35:07.744065] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2147862000 00:18:08.789 [2024-10-14 17:35:07.745051] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:08.789 [2024-10-14 17:35:07.746053] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:08.789 [2024-10-14 17:35:07.747059] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:08.789 [2024-10-14 17:35:07.748065] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:08.789 [2024-10-14 17:35:07.749074] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:08.789 [2024-10-14 17:35:07.750078] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:08.789 [2024-10-14 17:35:07.751085] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:08.789 [2024-10-14 17:35:07.752092] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:08.789 [2024-10-14 17:35:07.753105] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:08.789 [2024-10-14 17:35:07.753119] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2147857000 00:18:08.789 [2024-10-14 17:35:07.754037] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:08.789 [2024-10-14 17:35:07.765868] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:08.789 [2024-10-14 17:35:07.765892] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:18:08.789 [2024-10-14 17:35:07.770977] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:08.789 [2024-10-14 17:35:07.771012] nvme_pcie_common.c: 149:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:08.789 [2024-10-14 17:35:07.771075] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:18:08.789 [2024-10-14 
17:35:07.771091] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:18:08.789 [2024-10-14 17:35:07.771096] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:18:08.789 [2024-10-14 17:35:07.771976] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:08.789 [2024-10-14 17:35:07.771986] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:18:08.789 [2024-10-14 17:35:07.771992] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:18:08.789 [2024-10-14 17:35:07.772986] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:08.789 [2024-10-14 17:35:07.772995] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:18:08.789 [2024-10-14 17:35:07.773001] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:18:08.789 [2024-10-14 17:35:07.773994] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:08.789 [2024-10-14 17:35:07.774006] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:08.789 [2024-10-14 17:35:07.774999] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:08.789 [2024-10-14 17:35:07.775007] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:18:08.789 [2024-10-14 17:35:07.775012] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:18:08.789 [2024-10-14 17:35:07.775017] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:08.789 [2024-10-14 17:35:07.775122] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:18:08.789 [2024-10-14 17:35:07.775126] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:08.789 [2024-10-14 17:35:07.775131] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:08.790 [2024-10-14 17:35:07.776004] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:08.790 [2024-10-14 17:35:07.777021] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:08.790 [2024-10-14 17:35:07.778030] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:18:08.790 [2024-10-14 17:35:07.779029] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:08.790 [2024-10-14 17:35:07.779069] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:08.790 [2024-10-14 17:35:07.780035] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:08.790 [2024-10-14 17:35:07.780044] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:08.790 [2024-10-14 17:35:07.780049] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.780065] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:18:08.790 [2024-10-14 17:35:07.780076] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.780086] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:08.790 [2024-10-14 17:35:07.780090] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:08.790 [2024-10-14 17:35:07.780093] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:08.790 [2024-10-14 17:35:07.780104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:08.790 [2024-10-14 17:35:07.787610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:08.790 [2024-10-14 17:35:07.787621] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:18:08.790 [2024-10-14 17:35:07.787625] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:18:08.790 [2024-10-14 17:35:07.787631] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:18:08.790 [2024-10-14 17:35:07.787635] nvme_ctrlr.c:2115:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:08.790 [2024-10-14 17:35:07.787639] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:18:08.790 [2024-10-14 17:35:07.787643] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:18:08.790 [2024-10-14 17:35:07.787647] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.787656] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.787665] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:08.790 [2024-10-14 17:35:07.795607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:08.790 [2024-10-14 17:35:07.795619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.790 [2024-10-14 17:35:07.795626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.790 [2024-10-14 17:35:07.795633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.790 [2024-10-14 17:35:07.795640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.790 [2024-10-14 17:35:07.795645] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.795653] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.795661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:08.790 [2024-10-14 17:35:07.803607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:08.790 [2024-10-14 17:35:07.803616] nvme_ctrlr.c:3065:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:18:08.790 [2024-10-14 17:35:07.803621] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.803626] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.803633] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.803642] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:08.790 [2024-10-14 17:35:07.811609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:08.790 [2024-10-14 17:35:07.811664] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.811671] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.811677] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:08.790 [2024-10-14 17:35:07.811684] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:08.790 [2024-10-14 17:35:07.811687] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:18:08.790 [2024-10-14 17:35:07.811693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:08.790 [2024-10-14 17:35:07.819607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:08.790 [2024-10-14 17:35:07.819618] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:18:08.790 [2024-10-14 17:35:07.819626] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.819633] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.819639] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:08.790 [2024-10-14 17:35:07.819643] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:08.790 [2024-10-14 17:35:07.819646] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:08.790 [2024-10-14 17:35:07.819651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:08.790 [2024-10-14 17:35:07.827606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:08.790 [2024-10-14 17:35:07.827620] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.827627] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.827633] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:08.790 [2024-10-14 17:35:07.827638] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:08.790 [2024-10-14 17:35:07.827641] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:08.790 [2024-10-14 17:35:07.827646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:08.790 [2024-10-14 17:35:07.835608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:08.790 [2024-10-14 17:35:07.835621] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.835628] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.835638] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.835643] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.835647] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.835652] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.835657] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:18:08.790 [2024-10-14 17:35:07.835663] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:18:08.790 [2024-10-14 17:35:07.835668] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:18:08.790 [2024-10-14 17:35:07.835684] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:08.790 [2024-10-14 17:35:07.843608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:08.790 [2024-10-14 17:35:07.843621] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:08.790 [2024-10-14 17:35:07.851608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:08.790 [2024-10-14 17:35:07.851620] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:08.790 [2024-10-14 17:35:07.859607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:08.790 [2024-10-14 17:35:07.859621] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:08.790 [2024-10-14 17:35:07.867608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:08.790 [2024-10-14 17:35:07.867624] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:08.790 [2024-10-14 17:35:07.867629] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:08.790 [2024-10-14 17:35:07.867632] nvme_pcie_common.c:1265:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:08.790 [2024-10-14 17:35:07.867635] nvme_pcie_common.c:1281:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:08.790 [2024-10-14 17:35:07.867638] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:08.790 [2024-10-14 17:35:07.867644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:08.790 [2024-10-14 17:35:07.867651] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:08.790 [2024-10-14 17:35:07.867654] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:08.790 [2024-10-14 17:35:07.867657] 
nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:08.791 [2024-10-14 17:35:07.867663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:08.791 [2024-10-14 17:35:07.867669] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:08.791 [2024-10-14 17:35:07.867672] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:08.791 [2024-10-14 17:35:07.867675] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:08.791 [2024-10-14 17:35:07.867681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:08.791 [2024-10-14 17:35:07.867687] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:08.791 [2024-10-14 17:35:07.867691] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:08.791 [2024-10-14 17:35:07.867694] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:08.791 [2024-10-14 17:35:07.867699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:08.791 [2024-10-14 17:35:07.875608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:08.791 [2024-10-14 17:35:07.875623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:08.791 [2024-10-14 17:35:07.875632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:08.791 [2024-10-14 17:35:07.875639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:08.791 ===================================================== 00:18:08.791 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:08.791 ===================================================== 00:18:08.791 Controller Capabilities/Features 00:18:08.791 ================================ 00:18:08.791 Vendor ID: 4e58 00:18:08.791 Subsystem Vendor ID: 4e58 00:18:08.791 Serial Number: SPDK2 00:18:08.791 Model Number: SPDK bdev Controller 00:18:08.791 Firmware Version: 25.01 00:18:08.791 Recommended Arb Burst: 6 00:18:08.791 IEEE OUI Identifier: 8d 6b 50 00:18:08.791 Multi-path I/O 00:18:08.791 May have multiple subsystem ports: Yes 00:18:08.791 May have multiple controllers: Yes 00:18:08.791 Associated with SR-IOV VF: No 00:18:08.791 Max Data Transfer Size: 131072 00:18:08.791 Max Number of Namespaces: 32 00:18:08.791 Max Number of I/O Queues: 127 00:18:08.791 NVMe Specification Version (VS): 1.3 00:18:08.791 NVMe Specification Version (Identify): 1.3 00:18:08.791 Maximum Queue Entries: 256 00:18:08.791 Contiguous Queues Required: Yes 00:18:08.791 Arbitration Mechanisms Supported 00:18:08.791 Weighted Round Robin: Not Supported 00:18:08.791 Vendor Specific: Not Supported 00:18:08.791 Reset Timeout: 15000 ms 00:18:08.791 Doorbell Stride: 4 bytes 00:18:08.791 NVM Subsystem Reset: Not Supported 00:18:08.791 Command 
Sets Supported 00:18:08.791 NVM Command Set: Supported 00:18:08.791 Boot Partition: Not Supported 00:18:08.791 Memory Page Size Minimum: 4096 bytes 00:18:08.791 Memory Page Size Maximum: 4096 bytes 00:18:08.791 Persistent Memory Region: Not Supported 00:18:08.791 Optional Asynchronous Events Supported 00:18:08.791 Namespace Attribute Notices: Supported 00:18:08.791 Firmware Activation Notices: Not Supported 00:18:08.791 ANA Change Notices: Not Supported 00:18:08.791 PLE Aggregate Log Change Notices: Not Supported 00:18:08.791 LBA Status Info Alert Notices: Not Supported 00:18:08.791 EGE Aggregate Log Change Notices: Not Supported 00:18:08.791 Normal NVM Subsystem Shutdown event: Not Supported 00:18:08.791 Zone Descriptor Change Notices: Not Supported 00:18:08.791 Discovery Log Change Notices: Not Supported 00:18:08.791 Controller Attributes 00:18:08.791 128-bit Host Identifier: Supported 00:18:08.791 Non-Operational Permissive Mode: Not Supported 00:18:08.791 NVM Sets: Not Supported 00:18:08.791 Read Recovery Levels: Not Supported 00:18:08.791 Endurance Groups: Not Supported 00:18:08.791 Predictable Latency Mode: Not Supported 00:18:08.791 Traffic Based Keep ALive: Not Supported 00:18:08.791 Namespace Granularity: Not Supported 00:18:08.791 SQ Associations: Not Supported 00:18:08.791 UUID List: Not Supported 00:18:08.791 Multi-Domain Subsystem: Not Supported 00:18:08.791 Fixed Capacity Management: Not Supported 00:18:08.791 Variable Capacity Management: Not Supported 00:18:08.791 Delete Endurance Group: Not Supported 00:18:08.791 Delete NVM Set: Not Supported 00:18:08.791 Extended LBA Formats Supported: Not Supported 00:18:08.791 Flexible Data Placement Supported: Not Supported 00:18:08.791 00:18:08.791 Controller Memory Buffer Support 00:18:08.791 ================================ 00:18:08.791 Supported: No 00:18:08.791 00:18:08.791 Persistent Memory Region Support 00:18:08.791 ================================ 00:18:08.791 Supported: No 00:18:08.791 00:18:08.791 Admin Command Set Attributes 00:18:08.791 ============================ 00:18:08.791 Security Send/Receive: Not Supported 00:18:08.791 Format NVM: Not Supported 00:18:08.791 Firmware Activate/Download: Not Supported 00:18:08.791 Namespace Management: Not Supported 00:18:08.791 Device Self-Test: Not Supported 00:18:08.791 Directives: Not Supported 00:18:08.791 NVMe-MI: Not Supported 00:18:08.791 Virtualization Management: Not Supported 00:18:08.791 Doorbell Buffer Config: Not Supported 00:18:08.791 Get LBA Status Capability: Not Supported 00:18:08.791 Command & Feature Lockdown Capability: Not Supported 00:18:08.791 Abort Command Limit: 4 00:18:08.791 Async Event Request Limit: 4 00:18:08.791 Number of Firmware Slots: N/A 00:18:08.791 Firmware Slot 1 Read-Only: N/A 00:18:08.791 Firmware Activation Without Reset: N/A 00:18:08.791 Multiple Update Detection Support: N/A 00:18:08.791 Firmware Update Granularity: No Information Provided 00:18:08.791 Per-Namespace SMART Log: No 00:18:08.791 Asymmetric Namespace Access Log Page: Not Supported 00:18:08.791 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:08.791 Command Effects Log Page: Supported 00:18:08.791 Get Log Page Extended Data: Supported 00:18:08.791 Telemetry Log Pages: Not Supported 00:18:08.791 Persistent Event Log Pages: Not Supported 00:18:08.791 Supported Log Pages Log Page: May Support 00:18:08.791 Commands Supported & Effects Log Page: Not Supported 00:18:08.791 Feature Identifiers & Effects Log Page:May Support 00:18:08.791 NVMe-MI Commands & Effects Log Page: May Support 
00:18:08.791 Data Area 4 for Telemetry Log: Not Supported 00:18:08.791 Error Log Page Entries Supported: 128 00:18:08.791 Keep Alive: Supported 00:18:08.791 Keep Alive Granularity: 10000 ms 00:18:08.791 00:18:08.791 NVM Command Set Attributes 00:18:08.791 ========================== 00:18:08.791 Submission Queue Entry Size 00:18:08.791 Max: 64 00:18:08.791 Min: 64 00:18:08.791 Completion Queue Entry Size 00:18:08.791 Max: 16 00:18:08.791 Min: 16 00:18:08.791 Number of Namespaces: 32 00:18:08.791 Compare Command: Supported 00:18:08.791 Write Uncorrectable Command: Not Supported 00:18:08.791 Dataset Management Command: Supported 00:18:08.791 Write Zeroes Command: Supported 00:18:08.791 Set Features Save Field: Not Supported 00:18:08.791 Reservations: Not Supported 00:18:08.791 Timestamp: Not Supported 00:18:08.791 Copy: Supported 00:18:08.791 Volatile Write Cache: Present 00:18:08.791 Atomic Write Unit (Normal): 1 00:18:08.791 Atomic Write Unit (PFail): 1 00:18:08.791 Atomic Compare & Write Unit: 1 00:18:08.791 Fused Compare & Write: Supported 00:18:08.791 Scatter-Gather List 00:18:08.791 SGL Command Set: Supported (Dword aligned) 00:18:08.791 SGL Keyed: Not Supported 00:18:08.791 SGL Bit Bucket Descriptor: Not Supported 00:18:08.791 SGL Metadata Pointer: Not Supported 00:18:08.791 Oversized SGL: Not Supported 00:18:08.791 SGL Metadata Address: Not Supported 00:18:08.791 SGL Offset: Not Supported 00:18:08.791 Transport SGL Data Block: Not Supported 00:18:08.791 Replay Protected Memory Block: Not Supported 00:18:08.791 00:18:08.791 Firmware Slot Information 00:18:08.791 ========================= 00:18:08.791 Active slot: 1 00:18:08.791 Slot 1 Firmware Revision: 25.01 00:18:08.791 00:18:08.791 00:18:08.791 Commands Supported and Effects 00:18:08.791 ============================== 00:18:08.791 Admin Commands 00:18:08.791 -------------- 00:18:08.791 Get Log Page (02h): Supported 00:18:08.791 Identify (06h): Supported 00:18:08.791 Abort (08h): Supported 00:18:08.791 Set Features (09h): Supported 00:18:08.791 Get Features (0Ah): Supported 00:18:08.791 Asynchronous Event Request (0Ch): Supported 00:18:08.791 Keep Alive (18h): Supported 00:18:08.791 I/O Commands 00:18:08.791 ------------ 00:18:08.791 Flush (00h): Supported LBA-Change 00:18:08.791 Write (01h): Supported LBA-Change 00:18:08.791 Read (02h): Supported 00:18:08.791 Compare (05h): Supported 00:18:08.791 Write Zeroes (08h): Supported LBA-Change 00:18:08.791 Dataset Management (09h): Supported LBA-Change 00:18:08.791 Copy (19h): Supported LBA-Change 00:18:08.791 00:18:08.791 Error Log 00:18:08.791 ========= 00:18:08.791 00:18:08.791 Arbitration 00:18:08.791 =========== 00:18:08.791 Arbitration Burst: 1 00:18:08.791 00:18:08.791 Power Management 00:18:08.791 ================ 00:18:08.791 Number of Power States: 1 00:18:08.791 Current Power State: Power State #0 00:18:08.791 Power State #0: 00:18:08.791 Max Power: 0.00 W 00:18:08.791 Non-Operational State: Operational 00:18:08.791 Entry Latency: Not Reported 00:18:08.791 Exit Latency: Not Reported 00:18:08.791 Relative Read Throughput: 0 00:18:08.791 Relative Read Latency: 0 00:18:08.792 Relative Write Throughput: 0 00:18:08.792 Relative Write Latency: 0 00:18:08.792 Idle Power: Not Reported 00:18:08.792 Active Power: Not Reported 00:18:08.792 Non-Operational Permissive Mode: Not Supported 00:18:08.792 00:18:08.792 Health Information 00:18:08.792 ================== 00:18:08.792 Critical Warnings: 00:18:08.792 Available Spare Space: OK 00:18:08.792 Temperature: OK 00:18:08.792 Device 
Reliability: OK 00:18:08.792 Read Only: No 00:18:08.792 Volatile Memory Backup: OK 00:18:08.792 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:08.792 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:08.792 Available Spare: 0% 00:18:08.792 Available Spare Threshold: 0% [2024-10-14 17:35:07.875719] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:08.792 [2024-10-14 17:35:07.883606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:08.792 [2024-10-14 17:35:07.883635] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:18:08.792 [2024-10-14 17:35:07.883643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.792 [2024-10-14 17:35:07.883649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.792 [2024-10-14 17:35:07.883654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.792 [2024-10-14 17:35:07.883659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.792 [2024-10-14 17:35:07.883701] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:08.792 [2024-10-14 17:35:07.883712] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:08.792 [2024-10-14 17:35:07.884704] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:08.792 [2024-10-14 17:35:07.884748] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:18:08.792 [2024-10-14 17:35:07.884754] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:18:08.792 [2024-10-14 17:35:07.885714] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:08.792 [2024-10-14 17:35:07.885725] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:18:08.792 [2024-10-14 17:35:07.885774] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:08.792 [2024-10-14 17:35:07.886731] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:08.792 Life Percentage Used: 0% 00:18:08.792 Data Units Read: 0 00:18:08.792 Data Units Written: 0 00:18:08.792 Host Read Commands: 0 00:18:08.792 Host Write Commands: 0 00:18:08.792 Controller Busy Time: 0 minutes 00:18:08.792 Power Cycles: 0 00:18:08.792 Power On Hours: 0 hours 00:18:08.792 Unsafe Shutdowns: 0 00:18:08.792 Unrecoverable Media Errors: 0 00:18:08.792 Lifetime Error Log Entries: 0 00:18:08.792 Warning Temperature Time: 0 minutes 00:18:08.792 Critical Temperature Time: 0 minutes 00:18:08.792 00:18:08.792 Number of Queues 00:18:08.792 ================ 00:18:08.792 Number of
I/O Submission Queues: 127 00:18:08.792 Number of I/O Completion Queues: 127 00:18:08.792 00:18:08.792 Active Namespaces 00:18:08.792 ================= 00:18:08.792 Namespace ID:1 00:18:08.792 Error Recovery Timeout: Unlimited 00:18:08.792 Command Set Identifier: NVM (00h) 00:18:08.792 Deallocate: Supported 00:18:08.792 Deallocated/Unwritten Error: Not Supported 00:18:08.792 Deallocated Read Value: Unknown 00:18:08.792 Deallocate in Write Zeroes: Not Supported 00:18:08.792 Deallocated Guard Field: 0xFFFF 00:18:08.792 Flush: Supported 00:18:08.792 Reservation: Supported 00:18:08.792 Namespace Sharing Capabilities: Multiple Controllers 00:18:08.792 Size (in LBAs): 131072 (0GiB) 00:18:08.792 Capacity (in LBAs): 131072 (0GiB) 00:18:08.792 Utilization (in LBAs): 131072 (0GiB) 00:18:08.792 NGUID: 955921D198034A13A2A77CAFB1E3B368 00:18:08.792 UUID: 955921d1-9803-4a13-a2a7-7cafb1e3b368 00:18:08.792 Thin Provisioning: Not Supported 00:18:08.792 Per-NS Atomic Units: Yes 00:18:08.792 Atomic Boundary Size (Normal): 0 00:18:08.792 Atomic Boundary Size (PFail): 0 00:18:08.792 Atomic Boundary Offset: 0 00:18:08.792 Maximum Single Source Range Length: 65535 00:18:08.792 Maximum Copy Length: 65535 00:18:08.792 Maximum Source Range Count: 1 00:18:08.792 NGUID/EUI64 Never Reused: No 00:18:08.792 Namespace Write Protected: No 00:18:08.792 Number of LBA Formats: 1 00:18:08.792 Current LBA Format: LBA Format #00 00:18:08.792 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:08.792 00:18:08.792 17:35:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:09.050 [2024-10-14 17:35:08.104958] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:14.321 Initializing NVMe Controllers 00:18:14.321 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:14.321 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:14.321 Initialization complete. Launching workers. 
00:18:14.321 ======================================================== 00:18:14.321 Latency(us) 00:18:14.321 Device Information : IOPS MiB/s Average min max 00:18:14.321 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39956.37 156.08 3203.32 951.80 6660.69 00:18:14.321 ======================================================== 00:18:14.321 Total : 39956.37 156.08 3203.32 951.80 6660.69 00:18:14.321 00:18:14.321 [2024-10-14 17:35:13.209853] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:14.321 17:35:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:14.321 [2024-10-14 17:35:13.435532] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:19.594 Initializing NVMe Controllers 00:18:19.594 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:19.594 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:19.594 Initialization complete. Launching workers. 00:18:19.594 ======================================================== 00:18:19.594 Latency(us) 00:18:19.594 Device Information : IOPS MiB/s Average min max 00:18:19.594 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39919.18 155.93 3206.30 939.68 10623.27 00:18:19.594 ======================================================== 00:18:19.594 Total : 39919.18 155.93 3206.30 939.68 10623.27 00:18:19.594 00:18:19.594 [2024-10-14 17:35:18.454480] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:19.594 17:35:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:19.594 [2024-10-14 17:35:18.648660] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:24.869 [2024-10-14 17:35:23.793693] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:24.869 Initializing NVMe Controllers 00:18:24.869 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:24.869 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:24.869 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:24.869 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:24.869 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:24.869 Initialization complete. Launching workers. 
00:18:24.869 Starting thread on core 2 00:18:24.869 Starting thread on core 3 00:18:24.869 Starting thread on core 1 00:18:24.869 17:35:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:25.128 [2024-10-14 17:35:24.073664] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:28.420 [2024-10-14 17:35:27.555805] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:28.680 Initializing NVMe Controllers 00:18:28.680 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:28.680 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:28.680 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:28.680 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:28.680 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:28.680 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:28.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:28.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:28.680 Initialization complete. Launching workers. 00:18:28.680 Starting thread on core 1 with urgent priority queue 00:18:28.680 Starting thread on core 2 with urgent priority queue 00:18:28.680 Starting thread on core 3 with urgent priority queue 00:18:28.680 Starting thread on core 0 with urgent priority queue 00:18:28.680 SPDK bdev Controller (SPDK2 ) core 0: 4573.33 IO/s 21.87 secs/100000 ios 00:18:28.680 SPDK bdev Controller (SPDK2 ) core 1: 6983.67 IO/s 14.32 secs/100000 ios 00:18:28.680 SPDK bdev Controller (SPDK2 ) core 2: 3915.67 IO/s 25.54 secs/100000 ios 00:18:28.680 SPDK bdev Controller (SPDK2 ) core 3: 4644.67 IO/s 21.53 secs/100000 ios 00:18:28.680 ======================================================== 00:18:28.680 00:18:28.680 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:28.946 [2024-10-14 17:35:27.823071] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:28.946 Initializing NVMe Controllers 00:18:28.946 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:28.946 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:28.946 Namespace ID: 1 size: 0GB 00:18:28.946 Initialization complete. 00:18:28.946 INFO: using host memory buffer for IO 00:18:28.946 Hello world! 
00:18:28.946 [2024-10-14 17:35:27.835164] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:28.946 17:35:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:29.205 [2024-10-14 17:35:28.093505] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:30.143 Initializing NVMe Controllers 00:18:30.143 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:30.143 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:30.143 Initialization complete. Launching workers. 00:18:30.143 submit (in ns) avg, min, max = 6287.0, 3185.7, 3998978.1 00:18:30.143 complete (in ns) avg, min, max = 19662.8, 1766.7, 4020391.4 00:18:30.143 00:18:30.143 Submit histogram 00:18:30.143 ================ 00:18:30.143 Range in us Cumulative Count 00:18:30.143 3.185 - 3.200: 0.1266% ( 21) 00:18:30.143 3.200 - 3.215: 0.6811% ( 92) 00:18:30.143 3.215 - 3.230: 1.9228% ( 206) 00:18:30.143 3.230 - 3.246: 3.9120% ( 330) 00:18:30.143 3.246 - 3.261: 7.1911% ( 544) 00:18:30.143 3.261 - 3.276: 12.5558% ( 890) 00:18:30.143 3.276 - 3.291: 18.5714% ( 998) 00:18:30.143 3.291 - 3.307: 24.7920% ( 1032) 00:18:30.143 3.307 - 3.322: 30.9765% ( 1026) 00:18:30.143 3.322 - 3.337: 37.1308% ( 1021) 00:18:30.143 3.337 - 3.352: 42.6763% ( 920) 00:18:30.143 3.352 - 3.368: 48.4207% ( 953) 00:18:30.143 3.368 - 3.383: 54.5871% ( 1023) 00:18:30.143 3.383 - 3.398: 59.6263% ( 836) 00:18:30.143 3.398 - 3.413: 66.4376% ( 1130) 00:18:30.143 3.413 - 3.429: 73.0802% ( 1102) 00:18:30.143 3.429 - 3.444: 77.5829% ( 747) 00:18:30.143 3.444 - 3.459: 82.0555% ( 742) 00:18:30.143 3.459 - 3.474: 84.7619% ( 449) 00:18:30.143 3.474 - 3.490: 86.5099% ( 290) 00:18:30.143 3.490 - 3.505: 87.6733% ( 193) 00:18:30.143 3.505 - 3.520: 88.3183% ( 107) 00:18:30.143 3.520 - 3.535: 88.8246% ( 84) 00:18:30.143 3.535 - 3.550: 89.2706% ( 74) 00:18:30.143 3.550 - 3.566: 89.9578% ( 114) 00:18:30.143 3.566 - 3.581: 90.7655% ( 134) 00:18:30.143 3.581 - 3.596: 91.6757% ( 151) 00:18:30.143 3.596 - 3.611: 92.5437% ( 144) 00:18:30.143 3.611 - 3.627: 93.5322% ( 164) 00:18:30.143 3.627 - 3.642: 94.3822% ( 141) 00:18:30.143 3.642 - 3.657: 95.2743% ( 148) 00:18:30.143 3.657 - 3.672: 96.1543% ( 146) 00:18:30.143 3.672 - 3.688: 96.9078% ( 125) 00:18:30.143 3.688 - 3.703: 97.5286% ( 103) 00:18:30.143 3.703 - 3.718: 98.1796% ( 108) 00:18:30.143 3.718 - 3.733: 98.5172% ( 56) 00:18:30.143 3.733 - 3.749: 98.8125% ( 49) 00:18:30.143 3.749 - 3.764: 98.9934% ( 30) 00:18:30.143 3.764 - 3.779: 99.2345% ( 40) 00:18:30.143 3.779 - 3.794: 99.3972% ( 27) 00:18:30.143 3.794 - 3.810: 99.4635% ( 11) 00:18:30.143 3.810 - 3.825: 99.5298% ( 11) 00:18:30.144 3.825 - 3.840: 99.5600% ( 5) 00:18:30.144 3.840 - 3.855: 99.5961% ( 6) 00:18:30.144 3.855 - 3.870: 99.6022% ( 1) 00:18:30.144 3.870 - 3.886: 99.6142% ( 2) 00:18:30.144 4.206 - 4.236: 99.6203% ( 1) 00:18:30.144 4.937 - 4.968: 99.6323% ( 2) 00:18:30.144 4.998 - 5.029: 99.6383% ( 1) 00:18:30.144 5.029 - 5.059: 99.6444% ( 1) 00:18:30.144 5.090 - 5.120: 99.6504% ( 1) 00:18:30.144 5.120 - 5.150: 99.6564% ( 1) 00:18:30.144 5.150 - 5.181: 99.6624% ( 1) 00:18:30.144 5.303 - 5.333: 99.6685% ( 1) 00:18:30.144 5.394 - 5.425: 99.6805% ( 2) 00:18:30.144 5.425 - 5.455: 99.6866% ( 1) 00:18:30.144 5.455 - 5.486: 99.6926% ( 1) 
00:18:30.144 5.486 - 5.516: 99.6986% ( 1) 00:18:30.144 5.516 - 5.547: 99.7046% ( 1) 00:18:30.144 5.608 - 5.638: 99.7107% ( 1) 00:18:30.144 5.760 - 5.790: 99.7167% ( 1) 00:18:30.144 5.821 - 5.851: 99.7227% ( 1) 00:18:30.144 5.851 - 5.882: 99.7288% ( 1) 00:18:30.144 5.882 - 5.912: 99.7408% ( 2) 00:18:30.144 5.973 - 6.004: 99.7529% ( 2) 00:18:30.144 6.004 - 6.034: 99.7589% ( 1) 00:18:30.144 6.187 - 6.217: 99.7649% ( 1) 00:18:30.144 6.248 - 6.278: 99.7770% ( 2) 00:18:30.144 6.278 - 6.309: 99.7830% ( 1) 00:18:30.144 6.339 - 6.370: 99.7890% ( 1) 00:18:30.144 6.400 - 6.430: 99.8011% ( 2) 00:18:30.144 6.552 - 6.583: 99.8071% ( 1) 00:18:30.144 6.644 - 6.674: 99.8131% ( 1) 00:18:30.144 6.674 - 6.705: 99.8192% ( 1) 00:18:30.144 6.735 - 6.766: 99.8312% ( 2) 00:18:30.144 6.796 - 6.827: 99.8373% ( 1) 00:18:30.144 6.857 - 6.888: 99.8433% ( 1) 00:18:30.144 6.979 - 7.010: 99.8493% ( 1) 00:18:30.144 7.040 - 7.070: 99.8553% ( 1) 00:18:30.144 [2024-10-14 17:35:29.186592] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:30.144 7.070 - 7.101: 99.8614% ( 1) 00:18:30.144 7.131 - 7.162: 99.8674% ( 1) 00:18:30.144 7.345 - 7.375: 99.8734% ( 1) 00:18:30.144 7.436 - 7.467: 99.8794% ( 1) 00:18:30.144 7.467 - 7.497: 99.8855% ( 1) 00:18:30.144 7.497 - 7.528: 99.8975% ( 2) 00:18:30.144 7.802 - 7.863: 99.9096% ( 2) 00:18:30.144 8.046 - 8.107: 99.9156% ( 1) 00:18:30.144 8.594 - 8.655: 99.9216% ( 1) 00:18:30.144 11.886 - 11.947: 99.9277% ( 1) 00:18:30.144 3994.575 - 4025.783: 100.0000% ( 12) 00:18:30.144 00:18:30.144 Complete histogram 00:18:30.144 ================== 00:18:30.144 Range in us Cumulative Count 00:18:30.144 1.760 - 1.768: 0.0060% ( 1) 00:18:30.144 1.768 - 1.775: 0.1025% ( 16) 00:18:30.144 1.775 - 1.783: 0.4521% ( 58) 00:18:30.144 1.783 - 1.790: 1.0609% ( 101) 00:18:30.144 1.790 - 1.798: 1.9289% ( 144) 00:18:30.144 1.798 - 1.806: 2.9054% ( 162) 00:18:30.144 1.806 - 1.813: 5.0332% ( 353) 00:18:30.144 1.813 - 1.821: 18.4388% ( 2224) 00:18:30.144 1.821 - 1.829: 50.3134% ( 5288) 00:18:30.144 1.829 - 1.836: 77.6371% ( 4533) 00:18:30.144 1.836 - 1.844: 88.7281% ( 1840) 00:18:30.144 1.844 - 1.851: 92.5256% ( 630) 00:18:30.144 1.851 - 1.859: 94.8222% ( 381) 00:18:30.144 1.859 - 1.867: 95.9132% ( 181) 00:18:30.144 1.867 - 1.874: 96.3894% ( 79) 00:18:30.144 1.874 - 1.882: 96.7028% ( 52) 00:18:30.144 1.882 - 1.890: 97.0826% ( 63) 00:18:30.144 1.890 - 1.897: 97.6492% ( 94) 00:18:30.144 1.897 - 1.905: 98.1435% ( 82) 00:18:30.144 1.905 - 1.912: 98.6739% ( 88) 00:18:30.144 1.912 - 1.920: 99.0175% ( 57) 00:18:30.144 1.920 - 1.928: 99.1501% ( 22) 00:18:30.144 1.928 - 1.935: 99.2043% ( 9) 00:18:30.144 1.935 - 1.943: 99.2586% ( 9) 00:18:30.144 1.943 - 1.950: 99.2646% ( 1) 00:18:30.144 1.950 - 1.966: 99.2827% ( 3) 00:18:30.144 1.966 - 1.981: 99.3128% ( 5) 00:18:30.144 1.981 - 1.996: 99.3249% ( 2) 00:18:30.144 1.996 - 2.011: 99.3369% ( 2) 00:18:30.144 2.027 - 2.042: 99.3550% ( 3) 00:18:30.144 2.072 - 2.088: 99.3611% ( 1) 00:18:30.144 2.179 - 2.194: 99.3671% ( 1) 00:18:30.144 2.286 - 2.301: 99.3731% ( 1) 00:18:30.144 2.408 - 2.423: 99.3791% ( 1) 00:18:30.144 3.383 - 3.398: 99.3852% ( 1) 00:18:30.144 3.490 - 3.505: 99.3912% ( 1) 00:18:30.144 3.703 - 3.718: 99.3972% ( 1) 00:18:30.144 3.718 - 3.733: 99.4033% ( 1) 00:18:30.144 3.992 - 4.023: 99.4093% ( 1) 00:18:30.144 4.084 - 4.114: 99.4153% ( 1) 00:18:30.144 4.145 - 4.175: 99.4213% ( 1) 00:18:30.144 4.328 - 4.358: 99.4274% ( 1) 00:18:30.144 4.450 - 4.480: 99.4334% ( 1) 00:18:30.144 4.602 - 4.632: 99.4394% ( 1) 00:18:30.144 
4.815 - 4.846: 99.4454% ( 1) 00:18:30.144 4.846 - 4.876: 99.4575% ( 2) 00:18:30.144 4.968 - 4.998: 99.4635% ( 1) 00:18:30.144 5.059 - 5.090: 99.4696% ( 1) 00:18:30.144 5.120 - 5.150: 99.4756% ( 1) 00:18:30.144 5.181 - 5.211: 99.4816% ( 1) 00:18:30.144 5.211 - 5.242: 99.4937% ( 2) 00:18:30.144 5.303 - 5.333: 99.4997% ( 1) 00:18:30.144 5.425 - 5.455: 99.5057% ( 1) 00:18:30.144 5.547 - 5.577: 99.5118% ( 1) 00:18:30.144 5.577 - 5.608: 99.5178% ( 1) 00:18:30.144 5.638 - 5.669: 99.5238% ( 1) 00:18:30.144 5.943 - 5.973: 99.5298% ( 1) 00:18:30.144 6.827 - 6.857: 99.5359% ( 1) 00:18:30.144 6.979 - 7.010: 99.5419% ( 1) 00:18:30.144 13.653 - 13.714: 99.5479% ( 1) 00:18:30.144 132.632 - 133.608: 99.5539% ( 1) 00:18:30.144 3994.575 - 4025.783: 100.0000% ( 74) 00:18:30.144 00:18:30.144 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:30.144 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:30.144 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:30.144 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:30.144 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:30.404 [ 00:18:30.404 { 00:18:30.404 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:30.404 "subtype": "Discovery", 00:18:30.404 "listen_addresses": [], 00:18:30.404 "allow_any_host": true, 00:18:30.404 "hosts": [] 00:18:30.404 }, 00:18:30.404 { 00:18:30.404 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:30.404 "subtype": "NVMe", 00:18:30.404 "listen_addresses": [ 00:18:30.404 { 00:18:30.404 "trtype": "VFIOUSER", 00:18:30.404 "adrfam": "IPv4", 00:18:30.404 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:30.404 "trsvcid": "0" 00:18:30.404 } 00:18:30.404 ], 00:18:30.404 "allow_any_host": true, 00:18:30.404 "hosts": [], 00:18:30.404 "serial_number": "SPDK1", 00:18:30.404 "model_number": "SPDK bdev Controller", 00:18:30.404 "max_namespaces": 32, 00:18:30.404 "min_cntlid": 1, 00:18:30.404 "max_cntlid": 65519, 00:18:30.404 "namespaces": [ 00:18:30.404 { 00:18:30.404 "nsid": 1, 00:18:30.404 "bdev_name": "Malloc1", 00:18:30.404 "name": "Malloc1", 00:18:30.404 "nguid": "CF8600A8B9DD4432B068F1ABE39ECEED", 00:18:30.404 "uuid": "cf8600a8-b9dd-4432-b068-f1abe39eceed" 00:18:30.404 }, 00:18:30.404 { 00:18:30.404 "nsid": 2, 00:18:30.404 "bdev_name": "Malloc3", 00:18:30.404 "name": "Malloc3", 00:18:30.404 "nguid": "4B7E4A66E8D448F29DAC4EFE7F1854A8", 00:18:30.404 "uuid": "4b7e4a66-e8d4-48f2-9dac-4efe7f1854a8" 00:18:30.404 } 00:18:30.404 ] 00:18:30.404 }, 00:18:30.404 { 00:18:30.404 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:30.404 "subtype": "NVMe", 00:18:30.404 "listen_addresses": [ 00:18:30.404 { 00:18:30.404 "trtype": "VFIOUSER", 00:18:30.404 "adrfam": "IPv4", 00:18:30.404 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:30.404 "trsvcid": "0" 00:18:30.404 } 00:18:30.404 ], 00:18:30.404 "allow_any_host": true, 00:18:30.404 "hosts": [], 00:18:30.404 "serial_number": "SPDK2", 00:18:30.404 "model_number": "SPDK bdev Controller", 00:18:30.404 "max_namespaces": 32, 00:18:30.404 "min_cntlid": 1, 00:18:30.404 "max_cntlid": 65519, 00:18:30.404 "namespaces": [ 
00:18:30.404 { 00:18:30.404 "nsid": 1, 00:18:30.404 "bdev_name": "Malloc2", 00:18:30.404 "name": "Malloc2", 00:18:30.404 "nguid": "955921D198034A13A2A77CAFB1E3B368", 00:18:30.404 "uuid": "955921d1-9803-4a13-a2a7-7cafb1e3b368" 00:18:30.404 } 00:18:30.404 ] 00:18:30.404 } 00:18:30.404 ] 00:18:30.404 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:30.404 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:30.404 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1082405 00:18:30.404 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:30.404 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:30.404 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:30.404 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:30.404 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:30.404 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:30.404 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:30.663 [2024-10-14 17:35:29.551020] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:30.663 Malloc4 00:18:30.663 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:30.923 [2024-10-14 17:35:29.815013] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:30.923 17:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:30.923 Asynchronous Event Request test 00:18:30.923 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:30.923 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:30.923 Registering asynchronous event callbacks... 00:18:30.923 Starting namespace attribute notice tests for all controllers... 00:18:30.923 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:30.923 aer_cb - Changed Namespace 00:18:30.923 Cleaning up... 
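Note for anyone replaying this AER exercise outside the harness: the touch-file handshake above condenses to roughly the sequence below. This is a sketch rather than a verbatim excerpt of the script; the commands are lifted from the trace, and waitforfile is the autotest_common.sh helper that polls until the tool creates the file.

  # start the AER tool against cnode2; it creates the touch file once its
  # asynchronous-event callbacks are registered
  ./test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file &
  waitforfile /tmp/aer_touch_file && rm -f /tmp/aer_touch_file
  # hot-add a second namespace; the tool should log "aer_cb - Changed Namespace"
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2

The nvmf_get_subsystems listing that follows confirms the result: Malloc4 now appears as nsid 2 under nqn.2019-07.io.spdk:cnode2.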
00:18:30.923 [ 00:18:30.923 { 00:18:30.923 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:30.923 "subtype": "Discovery", 00:18:30.923 "listen_addresses": [], 00:18:30.923 "allow_any_host": true, 00:18:30.923 "hosts": [] 00:18:30.923 }, 00:18:30.923 { 00:18:30.923 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:30.923 "subtype": "NVMe", 00:18:30.923 "listen_addresses": [ 00:18:30.923 { 00:18:30.923 "trtype": "VFIOUSER", 00:18:30.923 "adrfam": "IPv4", 00:18:30.923 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:30.923 "trsvcid": "0" 00:18:30.923 } 00:18:30.923 ], 00:18:30.923 "allow_any_host": true, 00:18:30.923 "hosts": [], 00:18:30.923 "serial_number": "SPDK1", 00:18:30.923 "model_number": "SPDK bdev Controller", 00:18:30.923 "max_namespaces": 32, 00:18:30.923 "min_cntlid": 1, 00:18:30.923 "max_cntlid": 65519, 00:18:30.923 "namespaces": [ 00:18:30.923 { 00:18:30.923 "nsid": 1, 00:18:30.923 "bdev_name": "Malloc1", 00:18:30.923 "name": "Malloc1", 00:18:30.923 "nguid": "CF8600A8B9DD4432B068F1ABE39ECEED", 00:18:30.923 "uuid": "cf8600a8-b9dd-4432-b068-f1abe39eceed" 00:18:30.923 }, 00:18:30.923 { 00:18:30.923 "nsid": 2, 00:18:30.923 "bdev_name": "Malloc3", 00:18:30.923 "name": "Malloc3", 00:18:30.923 "nguid": "4B7E4A66E8D448F29DAC4EFE7F1854A8", 00:18:30.923 "uuid": "4b7e4a66-e8d4-48f2-9dac-4efe7f1854a8" 00:18:30.923 } 00:18:30.923 ] 00:18:30.923 }, 00:18:30.923 { 00:18:30.923 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:30.923 "subtype": "NVMe", 00:18:30.923 "listen_addresses": [ 00:18:30.923 { 00:18:30.923 "trtype": "VFIOUSER", 00:18:30.923 "adrfam": "IPv4", 00:18:30.923 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:30.923 "trsvcid": "0" 00:18:30.923 } 00:18:30.923 ], 00:18:30.923 "allow_any_host": true, 00:18:30.923 "hosts": [], 00:18:30.923 "serial_number": "SPDK2", 00:18:30.923 "model_number": "SPDK bdev Controller", 00:18:30.923 "max_namespaces": 32, 00:18:30.923 "min_cntlid": 1, 00:18:30.923 "max_cntlid": 65519, 00:18:30.923 "namespaces": [ 00:18:30.923 { 00:18:30.923 "nsid": 1, 00:18:30.923 "bdev_name": "Malloc2", 00:18:30.923 "name": "Malloc2", 00:18:30.923 "nguid": "955921D198034A13A2A77CAFB1E3B368", 00:18:30.923 "uuid": "955921d1-9803-4a13-a2a7-7cafb1e3b368" 00:18:30.923 }, 00:18:30.923 { 00:18:30.923 "nsid": 2, 00:18:30.923 "bdev_name": "Malloc4", 00:18:30.923 "name": "Malloc4", 00:18:30.923 "nguid": "B1145A4E3C874630829BE29A7401FC45", 00:18:30.923 "uuid": "b1145a4e-3c87-4630-829b-e29a7401fc45" 00:18:30.923 } 00:18:30.923 ] 00:18:30.923 } 00:18:30.923 ] 00:18:30.923 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1082405 00:18:30.923 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:30.923 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1074785 00:18:30.923 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1074785 ']' 00:18:30.923 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1074785 00:18:30.923 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:30.923 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:30.923 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1074785 00:18:31.182 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:31.183 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:31.183 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1074785' 00:18:31.183 killing process with pid 1074785 00:18:31.183 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1074785 00:18:31.183 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1074785 00:18:31.442 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:31.442 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:31.442 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:31.442 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:31.442 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:31.443 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1082552 00:18:31.443 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1082552' 00:18:31.443 Process pid: 1082552 00:18:31.443 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:31.443 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:31.443 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1082552 00:18:31.443 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1082552 ']' 00:18:31.443 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.443 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:31.443 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.443 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:31.443 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:31.443 [2024-10-14 17:35:30.380549] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:31.443 [2024-10-14 17:35:30.381473] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:18:31.443 [2024-10-14 17:35:30.381512] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.443 [2024-10-14 17:35:30.451450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:31.443 [2024-10-14 17:35:30.496374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.443 [2024-10-14 17:35:30.496408] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.443 [2024-10-14 17:35:30.496415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.443 [2024-10-14 17:35:30.496421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.443 [2024-10-14 17:35:30.496426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.443 [2024-10-14 17:35:30.498017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.443 [2024-10-14 17:35:30.498125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.443 [2024-10-14 17:35:30.498252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.443 [2024-10-14 17:35:30.498253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:31.443 [2024-10-14 17:35:30.566840] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:31.443 [2024-10-14 17:35:30.567561] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:31.443 [2024-10-14 17:35:30.567969] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:18:31.443 [2024-10-14 17:35:30.568573] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:31.443 [2024-10-14 17:35:30.568609] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
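For reference, the interrupt-mode bring-up traced here, together with the per-device RPCs that follow below, reduces to roughly this sequence (a condensed sketch assuming the same SPDK tree layout; it is not a verbatim excerpt of nvmf_vfio_user.sh):

  # target pinned to cores 0-3 with reactors in interrupt mode, as the
  # thread.c notices above confirm
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  # once the RPC socket is up, create the vfio-user transport with the
  # flags under test (-M -I), then one malloc-backed subsystem per device
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The same bdev/subsystem/namespace/listener RPCs then repeat for Malloc2 and cnode2, as the trace below shows.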
00:18:31.701 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:31.701 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:31.701 17:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:32.639 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:32.899 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:32.899 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:32.899 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:32.899 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:32.899 17:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:32.899 Malloc1 00:18:32.899 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:33.158 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:33.417 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:33.676 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:33.676 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:33.676 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:33.676 Malloc2 00:18:33.935 17:35:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:33.935 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:34.194 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:34.453 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:34.453 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1082552 00:18:34.453 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 1082552 ']' 00:18:34.453 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1082552 00:18:34.453 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:34.453 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:34.453 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1082552 00:18:34.453 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:34.453 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:34.453 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1082552' 00:18:34.453 killing process with pid 1082552 00:18:34.453 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1082552 00:18:34.453 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1082552 00:18:34.712 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:34.712 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:34.712 00:18:34.712 real 0m51.026s 00:18:34.712 user 3m17.308s 00:18:34.712 sys 0m3.375s 00:18:34.712 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:34.712 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:34.712 ************************************ 00:18:34.712 END TEST nvmf_vfio_user 00:18:34.712 ************************************ 00:18:34.712 17:35:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:34.712 17:35:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:34.712 17:35:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:34.712 17:35:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:34.712 ************************************ 00:18:34.712 START TEST nvmf_vfio_user_nvme_compliance 00:18:34.712 ************************************ 00:18:34.712 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:34.712 * Looking for test storage... 
00:18:34.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:34.712 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:34.712 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:18:34.712 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:34.971 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:34.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.972 --rc genhtml_branch_coverage=1 00:18:34.972 --rc genhtml_function_coverage=1 00:18:34.972 --rc genhtml_legend=1 00:18:34.972 --rc geninfo_all_blocks=1 00:18:34.972 --rc geninfo_unexecuted_blocks=1 00:18:34.972 00:18:34.972 ' 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:34.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.972 --rc genhtml_branch_coverage=1 00:18:34.972 --rc genhtml_function_coverage=1 00:18:34.972 --rc genhtml_legend=1 00:18:34.972 --rc geninfo_all_blocks=1 00:18:34.972 --rc geninfo_unexecuted_blocks=1 00:18:34.972 00:18:34.972 ' 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:34.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.972 --rc genhtml_branch_coverage=1 00:18:34.972 --rc genhtml_function_coverage=1 00:18:34.972 --rc genhtml_legend=1 00:18:34.972 --rc geninfo_all_blocks=1 00:18:34.972 --rc geninfo_unexecuted_blocks=1 00:18:34.972 00:18:34.972 ' 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:34.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.972 --rc genhtml_branch_coverage=1 00:18:34.972 --rc genhtml_function_coverage=1 00:18:34.972 --rc genhtml_legend=1 00:18:34.972 --rc geninfo_all_blocks=1 00:18:34.972 --rc 
geninfo_unexecuted_blocks=1 00:18:34.972 00:18:34.972 ' 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:34.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:34.972 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:34.973 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:34.973 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:34.973 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1083192 00:18:34.973 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1083192' 00:18:34.973 Process pid: 1083192 00:18:34.973 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:34.973 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1083192 00:18:34.973 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:34.973 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1083192 ']' 00:18:34.973 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.973 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:34.973 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.973 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:34.973 17:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:34.973 [2024-10-14 17:35:34.017239] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:18:34.973 [2024-10-14 17:35:34.017288] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.973 [2024-10-14 17:35:34.084044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:35.232 [2024-10-14 17:35:34.126258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.232 [2024-10-14 17:35:34.126294] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.232 [2024-10-14 17:35:34.126301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.232 [2024-10-14 17:35:34.126307] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.232 [2024-10-14 17:35:34.126312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:35.232 [2024-10-14 17:35:34.127660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.232 [2024-10-14 17:35:34.127771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.232 [2024-10-14 17:35:34.127771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.232 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:35.232 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:18:35.232 17:35:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:36.170 malloc0 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:36.170 17:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.170 17:35:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:36.429 00:18:36.429 00:18:36.429 CUnit - A unit testing framework for C - Version 2.1-3 00:18:36.429 http://cunit.sourceforge.net/ 00:18:36.429 00:18:36.429 00:18:36.429 Suite: nvme_compliance 00:18:36.429 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-14 17:35:35.435882] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:36.429 [2024-10-14 17:35:35.437231] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:36.429 [2024-10-14 17:35:35.437245] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:36.429 [2024-10-14 17:35:35.437251] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:36.429 [2024-10-14 17:35:35.438896] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:36.429 passed 00:18:36.429 Test: admin_identify_ctrlr_verify_fused ...[2024-10-14 17:35:35.516438] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:36.429 [2024-10-14 17:35:35.519460] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:36.429 passed 00:18:36.688 Test: admin_identify_ns ...[2024-10-14 17:35:35.598121] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:36.688 [2024-10-14 17:35:35.658611] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:36.688 [2024-10-14 17:35:35.666616] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:36.688 [2024-10-14 17:35:35.687719] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:18:36.688 passed 00:18:36.688 Test: admin_get_features_mandatory_features ...[2024-10-14 17:35:35.764507] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:36.688 [2024-10-14 17:35:35.767528] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:36.688 passed 00:18:36.948 Test: admin_get_features_optional_features ...[2024-10-14 17:35:35.846052] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:36.948 [2024-10-14 17:35:35.849077] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:36.948 passed 00:18:36.948 Test: admin_set_features_number_of_queues ...[2024-10-14 17:35:35.926794] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:36.948 [2024-10-14 17:35:36.032686] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:36.948 passed 00:18:37.207 Test: admin_get_log_page_mandatory_logs ...[2024-10-14 17:35:36.106392] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:37.207 [2024-10-14 17:35:36.109408] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:37.207 passed 00:18:37.207 Test: admin_get_log_page_with_lpo ...[2024-10-14 17:35:36.185088] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:37.207 [2024-10-14 17:35:36.252609] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:37.207 [2024-10-14 17:35:36.265681] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:37.207 passed 00:18:37.207 Test: fabric_property_get ...[2024-10-14 17:35:36.341402] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:37.207 [2024-10-14 17:35:36.342639] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:37.207 [2024-10-14 17:35:36.344425] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:37.466 passed 00:18:37.466 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-14 17:35:36.421975] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:37.466 [2024-10-14 17:35:36.423205] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:37.466 [2024-10-14 17:35:36.424995] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:37.466 passed 00:18:37.466 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-14 17:35:36.502856] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:37.466 [2024-10-14 17:35:36.586606] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:37.466 [2024-10-14 17:35:36.602614] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:37.725 [2024-10-14 17:35:36.607681] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:37.725 passed 00:18:37.725 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-14 17:35:36.684454] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:37.725 [2024-10-14 17:35:36.685697] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:37.725 [2024-10-14 17:35:36.687475] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:18:37.725 passed 00:18:37.725 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-14 17:35:36.762164] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:37.725 [2024-10-14 17:35:36.840609] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:37.725 [2024-10-14 17:35:36.864613] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:37.984 [2024-10-14 17:35:36.869683] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:37.984 passed 00:18:37.984 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-14 17:35:36.943403] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:37.984 [2024-10-14 17:35:36.944636] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:37.984 [2024-10-14 17:35:36.944659] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:37.984 [2024-10-14 17:35:36.946425] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:37.984 passed 00:18:37.984 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-14 17:35:37.022876] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:37.984 [2024-10-14 17:35:37.114616] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:37.984 [2024-10-14 17:35:37.122611] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:38.243 [2024-10-14 17:35:37.130607] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:38.243 [2024-10-14 17:35:37.138607] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:38.243 [2024-10-14 17:35:37.167691] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.243 passed 00:18:38.243 Test: admin_create_io_sq_verify_pc ...[2024-10-14 17:35:37.245318] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.243 [2024-10-14 17:35:37.263617] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:38.243 [2024-10-14 17:35:37.281507] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.243 passed 00:18:38.243 Test: admin_create_io_qp_max_qps ...[2024-10-14 17:35:37.357036] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.622 [2024-10-14 17:35:38.461609] nvme_ctrlr.c:5535:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:18:39.881 [2024-10-14 17:35:38.843752] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.881 passed 00:18:39.881 Test: admin_create_io_sq_shared_cq ...[2024-10-14 17:35:38.921761] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.141 [2024-10-14 17:35:39.054610] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:40.141 [2024-10-14 17:35:39.091678] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.141 passed 00:18:40.141 00:18:40.141 Run Summary: Type Total Ran Passed Failed Inactive 00:18:40.141 suites 1 1 n/a 0 0 00:18:40.141 tests 18 18 18 0 0 00:18:40.141 asserts 360 
360 360 0 n/a 00:18:40.141 00:18:40.141 Elapsed time = 1.499 seconds 00:18:40.141 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1083192 00:18:40.141 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1083192 ']' 00:18:40.141 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1083192 00:18:40.141 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:18:40.141 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:40.141 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1083192 00:18:40.141 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:40.141 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:40.141 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1083192' 00:18:40.141 killing process with pid 1083192 00:18:40.141 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1083192 00:18:40.141 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1083192 00:18:40.401 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:40.401 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:40.401 00:18:40.401 real 0m5.605s 00:18:40.401 user 0m15.710s 00:18:40.401 sys 0m0.523s 00:18:40.401 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:40.401 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:40.401 ************************************ 00:18:40.401 END TEST nvmf_vfio_user_nvme_compliance 00:18:40.401 ************************************ 00:18:40.401 17:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:40.401 17:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:40.401 17:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:40.401 17:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:40.401 ************************************ 00:18:40.401 START TEST nvmf_vfio_user_fuzz 00:18:40.401 ************************************ 00:18:40.401 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:40.401 * Looking for test storage... 
00:18:40.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:40.401 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:40.401 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:18:40.401 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:40.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.661 --rc genhtml_branch_coverage=1 00:18:40.661 --rc genhtml_function_coverage=1 00:18:40.661 --rc genhtml_legend=1 00:18:40.661 --rc geninfo_all_blocks=1 00:18:40.661 --rc geninfo_unexecuted_blocks=1 00:18:40.661 00:18:40.661 ' 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:40.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.661 --rc genhtml_branch_coverage=1 00:18:40.661 --rc genhtml_function_coverage=1 00:18:40.661 --rc genhtml_legend=1 00:18:40.661 --rc geninfo_all_blocks=1 00:18:40.661 --rc geninfo_unexecuted_blocks=1 00:18:40.661 00:18:40.661 ' 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:40.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.661 --rc genhtml_branch_coverage=1 00:18:40.661 --rc genhtml_function_coverage=1 00:18:40.661 --rc genhtml_legend=1 00:18:40.661 --rc geninfo_all_blocks=1 00:18:40.661 --rc geninfo_unexecuted_blocks=1 00:18:40.661 00:18:40.661 ' 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:40.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.661 --rc genhtml_branch_coverage=1 00:18:40.661 --rc genhtml_function_coverage=1 00:18:40.661 --rc genhtml_legend=1 00:18:40.661 --rc geninfo_all_blocks=1 00:18:40.661 --rc geninfo_unexecuted_blocks=1 00:18:40.661 00:18:40.661 ' 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.661 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:40.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1084175 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1084175' 00:18:40.662 Process pid: 1084175 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1084175 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1084175 ']' 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:40.662 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:40.921 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:40.921 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:18:40.921 17:35:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:41.857 malloc0 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
00:18:41.857 17:35:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:13.947 Fuzzing completed. Shutting down the fuzz application 00:19:13.947 00:19:13.947 Dumping successful admin opcodes: 00:19:13.947 8, 9, 10, 24, 00:19:13.947 Dumping successful io opcodes: 00:19:13.947 0, 00:19:13.947 NS: 0x20000081ef00 I/O qp, Total commands completed: 1160302, total successful commands: 4566, random_seed: 838717888 00:19:13.948 NS: 0x20000081ef00 admin qp, Total commands completed: 287656, total successful commands: 2320, random_seed: 4268756928 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1084175 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1084175 ']' 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1084175 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1084175 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1084175' 00:19:13.948 killing process with pid 1084175 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1084175 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1084175 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:13.948 00:19:13.948 real 0m32.192s 00:19:13.948 user 0m34.536s 00:19:13.948 sys 0m26.813s 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:13.948 
************************************ 00:19:13.948 END TEST nvmf_vfio_user_fuzz 00:19:13.948 ************************************ 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:13.948 ************************************ 00:19:13.948 START TEST nvmf_auth_target 00:19:13.948 ************************************ 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:13.948 * Looking for test storage... 00:19:13.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:13.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.948 --rc genhtml_branch_coverage=1 00:19:13.948 --rc genhtml_function_coverage=1 00:19:13.948 --rc genhtml_legend=1 00:19:13.948 --rc geninfo_all_blocks=1 00:19:13.948 --rc geninfo_unexecuted_blocks=1 00:19:13.948 00:19:13.948 ' 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:13.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.948 --rc genhtml_branch_coverage=1 00:19:13.948 --rc genhtml_function_coverage=1 00:19:13.948 --rc genhtml_legend=1 00:19:13.948 --rc geninfo_all_blocks=1 00:19:13.948 --rc geninfo_unexecuted_blocks=1 00:19:13.948 00:19:13.948 ' 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:13.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.948 --rc genhtml_branch_coverage=1 00:19:13.948 --rc genhtml_function_coverage=1 00:19:13.948 --rc genhtml_legend=1 00:19:13.948 --rc geninfo_all_blocks=1 00:19:13.948 --rc geninfo_unexecuted_blocks=1 00:19:13.948 00:19:13.948 ' 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:13.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.948 --rc genhtml_branch_coverage=1 00:19:13.948 --rc genhtml_function_coverage=1 00:19:13.948 --rc genhtml_legend=1 00:19:13.948 --rc geninfo_all_blocks=1 00:19:13.948 --rc geninfo_unexecuted_blocks=1 00:19:13.948 00:19:13.948 ' 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.948 17:36:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.948 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:13.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:13.949 17:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:19.230 
17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:19.230 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:19.230 17:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:19.230 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:19.230 Found net devices under 0000:86:00.0: cvl_0_0 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:19.230 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:19.231 Found net devices under 0000:86:00.1: cvl_0_1 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:19.231 17:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:19.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:19.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:19:19.231 00:19:19.231 --- 10.0.0.2 ping statistics --- 00:19:19.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.231 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:19.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:19.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:19:19.231 00:19:19.231 --- 10.0.0.1 ping statistics --- 00:19:19.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.231 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1092992 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1092992 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1092992 ']' 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
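Stripped of xtrace noise, the interface plumbing that those two successful pings validated condenses to the following (commands as they appear in the trace above; the SPDK_NVMF comment match on the iptables rule is dropped):

ip netns add cvl_0_0_ns_spdk                          # target gets its own network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the first e810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

This is also why nvmf_tgt above is prefixed with 'ip netns exec cvl_0_0_ns_spdk': the target must run where cvl_0_0 and 10.0.0.2 live.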
00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:19.231 17:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1093093 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=197764f10ed1ffd4e30154ec6c20dc051ce51bfdebf12fcd 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.hOa 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 197764f10ed1ffd4e30154ec6c20dc051ce51bfdebf12fcd 0 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 197764f10ed1ffd4e30154ec6c20dc051ce51bfdebf12fcd 0 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=197764f10ed1ffd4e30154ec6c20dc051ce51bfdebf12fcd 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
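Each gen_dhchap_key call ends in a `python -` record whose heredoc body xtrace does not echo. Judging from the DHHC-1 secrets that surface later in the nvme connect commands (their base64 payloads decode to the ASCII hex string generated here plus four trailing bytes), that step emits the NVMe DH-HMAC-CHAP secret representation: base64(secret || little-endian CRC-32 of the secret) inside a DHHC-1:<hash id>:...: envelope. A plausible reconstruction of the hidden step follows; the actual heredoc in nvmf/common.sh may differ in detail, and here the shell variables are passed as argv rather than interpolated into the script:

# Hypothetical stand-in for the unechoed heredoc: formats $key (an ASCII hex
# string) as a DH-HMAC-CHAP secret, e.g. DHHC-1:00:...: for digest id 0.
python3 - "$prefix" "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
secret = key.encode()                           # the hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")  # 4-byte CRC-32 appended to the secret
print("%s:%02x:%s:" % (prefix, digest, base64.b64encode(secret + crc).decode()))
PYEOF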
00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.hOa 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.hOa 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.hOa 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=7b2e1c8540e2700fcda0e45f2dcba4fa05e9d9a892f45a437552ea33b7911981 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.xTc 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 7b2e1c8540e2700fcda0e45f2dcba4fa05e9d9a892f45a437552ea33b7911981 3 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 7b2e1c8540e2700fcda0e45f2dcba4fa05e9d9a892f45a437552ea33b7911981 3 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=7b2e1c8540e2700fcda0e45f2dcba4fa05e9d9a892f45a437552ea33b7911981 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.231 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.xTc 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.xTc 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.xTc 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
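The keys generated in this stretch come in pairs: keys[i] is the host-side DH-HMAC-CHAP secret and ckeys[i], when set, is the controller secret that enables bidirectional authentication. The pairs deliberately mix digests and lengths (null/48 with sha512/64 above, sha256/32 with sha384/48 here), and ckeys[3] is left empty further down so the unidirectional path gets exercised as well. That empty slot works through the ${ckeys[$3]:+...} expansion visible in the connect_authenticate records below; paraphrased with illustrative variable names:

# ${var:+word} expands to word only when var is set and non-empty, so an empty
# ckeys entry leaves ckey an empty array and the option pair simply vanishes
# from the final command line. $i, $subnqn and $hostnqn are placeholders here.
ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$i" "${ckey[@]}"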
00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=9105f7e73aed08332ed4eb78d5957cc1 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.5HZ 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 9105f7e73aed08332ed4eb78d5957cc1 1 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 9105f7e73aed08332ed4eb78d5957cc1 1 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=9105f7e73aed08332ed4eb78d5957cc1 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.5HZ 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.5HZ 00:19:19.232 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.5HZ 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=535261a97f33fcd522462124b407336c62e6b66f817c00d9 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.bAZ 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 535261a97f33fcd522462124b407336c62e6b66f817c00d9 2 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 535261a97f33fcd522462124b407336c62e6b66f817c00d9 2 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.491 17:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=535261a97f33fcd522462124b407336c62e6b66f817c00d9 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.bAZ 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.bAZ 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.bAZ 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=cf306e3816e490e88dfcddafa37b3453bde3bf2fd0f70179 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.nNQ 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key cf306e3816e490e88dfcddafa37b3453bde3bf2fd0f70179 2 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 cf306e3816e490e88dfcddafa37b3453bde3bf2fd0f70179 2 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.491 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=cf306e3816e490e88dfcddafa37b3453bde3bf2fd0f70179 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.nNQ 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.nNQ 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.nNQ 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
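A convenient property of the DHHC-1 envelope is that it is self-describing and checkable offline: the second field is the digests value as two hex digits (00 null, 01 sha256, 02 sha384, 03 sha512), and the base64 payload carries the secret plus its CRC-32, so any secret echoed by the nvme connect commands further down can be mapped back to the hex key generated here. A small helper, assuming the CRC layout described above holds (it is not part of the test scripts):

# Hypothetical helper: splits a DHHC-1 secret, verifies the trailing
# little-endian CRC-32, and prints the prefix, hash id and raw hex secret.
dhchap_decode() {
python3 - "$1" <<'PYEOF'
import base64, sys, zlib
prefix, hashid, b64, _ = sys.argv[1].split(":")
blob = base64.b64decode(b64)
secret, crc = blob[:-4], blob[-4:]
assert zlib.crc32(secret).to_bytes(4, "little") == crc, "CRC-32 mismatch"
print(prefix, hashid, secret.decode())
PYEOF
}
# e.g. dhchap_decode 'DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==:'
# should print: DHHC-1 00 197764f10ed1ffd4e30154ec6c20dc051ce51bfdebf12fcd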
00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=435d178516668f1fc3e5aa490a40b131 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.AFN 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 435d178516668f1fc3e5aa490a40b131 1 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 435d178516668f1fc3e5aa490a40b131 1 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=435d178516668f1fc3e5aa490a40b131 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.AFN 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.AFN 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.AFN 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=04dc1ea819c447c8ad2ba2daab39bf815dc405cc2a1818928b0229e40729bd2f 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.n7J 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key 04dc1ea819c447c8ad2ba2daab39bf815dc405cc2a1818928b0229e40729bd2f 3 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 04dc1ea819c447c8ad2ba2daab39bf815dc405cc2a1818928b0229e40729bd2f 3 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=04dc1ea819c447c8ad2ba2daab39bf815dc405cc2a1818928b0229e40729bd2f 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.n7J 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.n7J 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.n7J 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1092992 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1092992 ']' 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:19.492 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.751 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.751 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:19.751 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1093093 /var/tmp/host.sock 00:19:19.751 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1093093 ']' 00:19:19.751 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:19:19.751 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:19.751 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:19.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
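At this point nvmf_tgt (pid 1092992) is up inside the target namespace answering on the default /var/tmp/spdk.sock, and the host-side spdk_tgt (pid 1093093) is being waited on at /var/tmp/host.sock. Once both RPC sockets answer, everything that follows is the same round repeated per digest, dhgroup and keyid, which the interleaved xtrace makes hard to see. Distilled from the records below, one round looks roughly like this (rpc.py paths shortened, keys assumed already registered on both sockets via keyring_file_add_key):

# One connect_authenticate round, condensed from the trace that follows.
HOSTSOCK=/var/tmp/host.sock
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
SUBNQN=nqn.2024-03.io.spdk:cnode0
rpc.py -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
rpc.py nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc.py -s $HOSTSOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc.py nvmf_subsystem_get_qpairs $SUBNQN        # expect auth.state == "completed"
rpc.py -s $HOSTSOCK bdev_nvme_detach_controller nvme0
# (the test then repeats the handshake with the kernel initiator: nvme connect
#  using the DHHC-1 secrets, nvme disconnect, then tears the host back out)
rpc.py nvmf_subsystem_remove_host $SUBNQN $HOSTNQN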
00:19:19.751 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:19.751 17:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.010 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:20.010 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:20.010 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:20.010 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.010 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.010 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.010 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:20.010 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hOa 00:19:20.010 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.010 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.010 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.010 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.hOa 00:19:20.010 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.hOa 00:19:20.269 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.xTc ]] 00:19:20.269 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xTc 00:19:20.269 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.269 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.269 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.269 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xTc 00:19:20.269 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xTc 00:19:20.528 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:20.528 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.5HZ 00:19:20.528 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.528 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.528 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.528 17:36:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.5HZ 00:19:20.528 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.5HZ 00:19:20.528 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.bAZ ]] 00:19:20.528 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bAZ 00:19:20.528 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.528 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.528 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.528 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bAZ 00:19:20.528 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bAZ 00:19:20.786 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:20.786 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.nNQ 00:19:20.786 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.786 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.786 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.786 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.nNQ 00:19:20.786 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.nNQ 00:19:21.045 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.AFN ]] 00:19:21.045 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.AFN 00:19:21.045 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.045 17:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.045 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.045 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.AFN 00:19:21.045 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.AFN 00:19:21.304 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:21.304 17:36:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.n7J 00:19:21.304 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.304 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.304 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.304 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.n7J 00:19:21.304 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.n7J 00:19:21.304 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:21.304 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:21.304 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.304 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.304 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:21.304 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:21.562 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:21.562 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.562 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:21.562 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:21.562 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:21.562 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.562 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.562 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.562 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.563 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.563 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.563 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.563 
17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.821 00:19:21.821 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.821 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.821 17:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.080 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.080 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.080 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.080 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.080 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.080 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.080 { 00:19:22.080 "cntlid": 1, 00:19:22.080 "qid": 0, 00:19:22.080 "state": "enabled", 00:19:22.080 "thread": "nvmf_tgt_poll_group_000", 00:19:22.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:22.080 "listen_address": { 00:19:22.080 "trtype": "TCP", 00:19:22.080 "adrfam": "IPv4", 00:19:22.080 "traddr": "10.0.0.2", 00:19:22.080 "trsvcid": "4420" 00:19:22.080 }, 00:19:22.080 "peer_address": { 00:19:22.080 "trtype": "TCP", 00:19:22.080 "adrfam": "IPv4", 00:19:22.080 "traddr": "10.0.0.1", 00:19:22.080 "trsvcid": "32892" 00:19:22.080 }, 00:19:22.080 "auth": { 00:19:22.080 "state": "completed", 00:19:22.080 "digest": "sha256", 00:19:22.080 "dhgroup": "null" 00:19:22.080 } 00:19:22.080 } 00:19:22.080 ]' 00:19:22.080 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.080 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.080 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.080 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:22.080 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.080 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.080 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.080 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.339 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:19:22.339 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:19:22.907 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.907 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:22.907 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.907 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.907 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.907 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.907 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:22.907 17:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:23.166 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:23.166 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.166 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:23.166 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:23.166 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:23.166 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.166 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.166 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.166 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.166 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.166 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.166 17:36:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.166 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.425 00:19:23.425 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.425 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.425 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.683 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.683 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.683 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.683 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.683 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.683 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.683 { 00:19:23.683 "cntlid": 3, 00:19:23.683 "qid": 0, 00:19:23.683 "state": "enabled", 00:19:23.683 "thread": "nvmf_tgt_poll_group_000", 00:19:23.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:23.683 "listen_address": { 00:19:23.683 "trtype": "TCP", 00:19:23.683 "adrfam": "IPv4", 00:19:23.683 "traddr": "10.0.0.2", 00:19:23.683 "trsvcid": "4420" 00:19:23.683 }, 00:19:23.683 "peer_address": { 00:19:23.683 "trtype": "TCP", 00:19:23.683 "adrfam": "IPv4", 00:19:23.683 "traddr": "10.0.0.1", 00:19:23.683 "trsvcid": "32916" 00:19:23.683 }, 00:19:23.683 "auth": { 00:19:23.683 "state": "completed", 00:19:23.683 "digest": "sha256", 00:19:23.683 "dhgroup": "null" 00:19:23.683 } 00:19:23.683 } 00:19:23.683 ]' 00:19:23.683 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.683 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.683 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.683 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:23.683 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.683 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.683 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.683 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.942 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:19:23.942 17:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:19:24.510 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.510 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:24.510 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.510 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.510 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.510 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.510 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:24.510 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:24.769 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:24.769 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.769 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:24.769 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:24.769 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:24.769 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.769 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.769 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.769 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.769 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.769 17:36:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.769 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.769 17:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.080 00:19:25.080 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.080 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.080 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.402 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.402 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.402 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.402 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.402 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.402 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.402 { 00:19:25.402 "cntlid": 5, 00:19:25.402 "qid": 0, 00:19:25.402 "state": "enabled", 00:19:25.402 "thread": "nvmf_tgt_poll_group_000", 00:19:25.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:25.402 "listen_address": { 00:19:25.402 "trtype": "TCP", 00:19:25.402 "adrfam": "IPv4", 00:19:25.402 "traddr": "10.0.0.2", 00:19:25.402 "trsvcid": "4420" 00:19:25.402 }, 00:19:25.402 "peer_address": { 00:19:25.402 "trtype": "TCP", 00:19:25.402 "adrfam": "IPv4", 00:19:25.402 "traddr": "10.0.0.1", 00:19:25.402 "trsvcid": "37852" 00:19:25.402 }, 00:19:25.402 "auth": { 00:19:25.402 "state": "completed", 00:19:25.402 "digest": "sha256", 00:19:25.402 "dhgroup": "null" 00:19:25.402 } 00:19:25.402 } 00:19:25.402 ]' 00:19:25.402 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.402 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.402 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.402 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:25.402 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.402 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.402 17:36:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.402 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.714 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:19:25.714 17:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:19:26.282 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.282 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:26.282 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.282 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.282 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.282 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.282 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:26.282 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:26.282 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:26.282 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.282 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:26.282 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:26.282 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:26.282 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.282 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:26.282 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.282 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:26.282 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.282 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:26.283 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:26.283 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:26.541 00:19:26.541 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.541 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.542 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.800 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.800 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.800 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.800 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.800 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.800 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.800 { 00:19:26.800 "cntlid": 7, 00:19:26.800 "qid": 0, 00:19:26.800 "state": "enabled", 00:19:26.800 "thread": "nvmf_tgt_poll_group_000", 00:19:26.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:26.800 "listen_address": { 00:19:26.800 "trtype": "TCP", 00:19:26.800 "adrfam": "IPv4", 00:19:26.800 "traddr": "10.0.0.2", 00:19:26.800 "trsvcid": "4420" 00:19:26.800 }, 00:19:26.800 "peer_address": { 00:19:26.800 "trtype": "TCP", 00:19:26.800 "adrfam": "IPv4", 00:19:26.800 "traddr": "10.0.0.1", 00:19:26.800 "trsvcid": "37878" 00:19:26.800 }, 00:19:26.800 "auth": { 00:19:26.800 "state": "completed", 00:19:26.800 "digest": "sha256", 00:19:26.800 "dhgroup": "null" 00:19:26.800 } 00:19:26.800 } 00:19:26.800 ]' 00:19:26.800 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.800 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.800 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.059 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:27.059 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.059 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.059 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.059 17:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.059 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:19:27.059 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:19:27.627 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.627 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:27.627 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.627 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.627 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.627 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.901 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.901 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:27.902 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:27.902 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:27.902 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.902 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:27.902 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:27.902 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:27.902 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.902 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.902 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.902 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.902 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.902 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.902 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.902 17:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.168 00:19:28.168 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.168 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.168 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.427 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.427 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.427 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.427 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.427 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.427 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.427 { 00:19:28.427 "cntlid": 9, 00:19:28.427 "qid": 0, 00:19:28.427 "state": "enabled", 00:19:28.427 "thread": "nvmf_tgt_poll_group_000", 00:19:28.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:28.427 "listen_address": { 00:19:28.427 "trtype": "TCP", 00:19:28.427 "adrfam": "IPv4", 00:19:28.427 "traddr": "10.0.0.2", 00:19:28.427 "trsvcid": "4420" 00:19:28.427 }, 00:19:28.427 "peer_address": { 00:19:28.427 "trtype": "TCP", 00:19:28.427 "adrfam": "IPv4", 00:19:28.428 "traddr": "10.0.0.1", 00:19:28.428 "trsvcid": "37904" 00:19:28.428 }, 00:19:28.428 "auth": { 00:19:28.428 "state": "completed", 00:19:28.428 "digest": "sha256", 00:19:28.428 "dhgroup": "ffdhe2048" 00:19:28.428 } 00:19:28.428 } 00:19:28.428 ]' 00:19:28.428 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.428 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.428 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.428 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:28.428 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.686 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.686 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.686 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.686 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:19:28.687 17:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:19:29.254 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.254 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:29.254 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.254 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.254 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.254 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.254 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:29.254 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:29.512 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:29.512 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.512 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:29.512 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:29.512 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:29.512 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.512 17:36:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.512 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.512 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.512 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.512 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.512 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.512 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.771 00:19:29.771 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.771 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.771 17:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.030 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.030 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.030 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.030 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.030 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.030 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.030 { 00:19:30.030 "cntlid": 11, 00:19:30.030 "qid": 0, 00:19:30.030 "state": "enabled", 00:19:30.030 "thread": "nvmf_tgt_poll_group_000", 00:19:30.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:30.030 "listen_address": { 00:19:30.030 "trtype": "TCP", 00:19:30.030 "adrfam": "IPv4", 00:19:30.030 "traddr": "10.0.0.2", 00:19:30.030 "trsvcid": "4420" 00:19:30.030 }, 00:19:30.030 "peer_address": { 00:19:30.030 "trtype": "TCP", 00:19:30.030 "adrfam": "IPv4", 00:19:30.030 "traddr": "10.0.0.1", 00:19:30.030 "trsvcid": "37932" 00:19:30.030 }, 00:19:30.030 "auth": { 00:19:30.030 "state": "completed", 00:19:30.030 "digest": "sha256", 00:19:30.030 "dhgroup": "ffdhe2048" 00:19:30.030 } 00:19:30.030 } 00:19:30.030 ]' 00:19:30.030 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.030 17:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.030 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.030 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:30.030 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.289 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.289 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.289 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.289 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:19:30.289 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:19:30.857 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.857 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:30.857 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.857 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.857 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.857 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.857 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:30.857 17:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:31.115 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:31.115 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.115 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:31.115 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:31.115 17:36:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:31.115 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.115 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.115 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.115 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.115 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.115 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.115 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.115 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.374 00:19:31.374 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.374 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.374 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.632 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.632 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.632 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.632 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.632 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.632 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.632 { 00:19:31.632 "cntlid": 13, 00:19:31.632 "qid": 0, 00:19:31.632 "state": "enabled", 00:19:31.632 "thread": "nvmf_tgt_poll_group_000", 00:19:31.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:31.632 "listen_address": { 00:19:31.632 "trtype": "TCP", 00:19:31.632 "adrfam": "IPv4", 00:19:31.632 "traddr": "10.0.0.2", 00:19:31.632 "trsvcid": "4420" 00:19:31.632 }, 00:19:31.632 "peer_address": { 00:19:31.632 "trtype": "TCP", 00:19:31.632 "adrfam": "IPv4", 00:19:31.632 "traddr": "10.0.0.1", 00:19:31.632 "trsvcid": "37956" 00:19:31.632 }, 00:19:31.632 "auth": { 00:19:31.632 "state": "completed", 00:19:31.632 "digest": 
"sha256", 00:19:31.632 "dhgroup": "ffdhe2048" 00:19:31.632 } 00:19:31.632 } 00:19:31.632 ]' 00:19:31.632 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.633 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.633 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.633 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:31.633 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.633 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.633 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.633 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.891 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:19:31.891 17:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:19:32.459 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.459 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:32.459 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.459 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.459 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.459 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.459 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:32.459 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:32.719 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:32.719 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.719 17:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.719 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:32.719 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:32.719 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.719 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:32.719 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.719 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.719 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.719 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:32.719 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.719 17:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.978 00:19:32.978 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.978 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.978 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.236 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.236 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.236 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.236 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.236 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.236 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.236 { 00:19:33.236 "cntlid": 15, 00:19:33.236 "qid": 0, 00:19:33.236 "state": "enabled", 00:19:33.236 "thread": "nvmf_tgt_poll_group_000", 00:19:33.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:33.236 "listen_address": { 00:19:33.236 "trtype": "TCP", 00:19:33.236 "adrfam": "IPv4", 00:19:33.236 "traddr": "10.0.0.2", 00:19:33.236 "trsvcid": "4420" 00:19:33.236 }, 00:19:33.236 "peer_address": { 00:19:33.236 "trtype": "TCP", 00:19:33.236 "adrfam": "IPv4", 00:19:33.236 "traddr": "10.0.0.1", 00:19:33.236 
"trsvcid": "37982" 00:19:33.236 }, 00:19:33.236 "auth": { 00:19:33.236 "state": "completed", 00:19:33.236 "digest": "sha256", 00:19:33.236 "dhgroup": "ffdhe2048" 00:19:33.236 } 00:19:33.236 } 00:19:33.236 ]' 00:19:33.236 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.237 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.237 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.237 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:33.237 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.237 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.237 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.237 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.495 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:19:33.495 17:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:19:34.063 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.063 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:34.063 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.063 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.063 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.063 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.063 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.063 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:34.063 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:34.321 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:34.321 17:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.321 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:34.321 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:34.321 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:34.321 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.322 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.322 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.322 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.322 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.322 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.322 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.322 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.580 00:19:34.580 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.580 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.580 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.839 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.839 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.839 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.839 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.839 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.839 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.839 { 00:19:34.839 "cntlid": 17, 00:19:34.839 "qid": 0, 00:19:34.839 "state": "enabled", 00:19:34.839 "thread": "nvmf_tgt_poll_group_000", 00:19:34.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:34.839 "listen_address": { 00:19:34.839 "trtype": "TCP", 00:19:34.839 "adrfam": "IPv4", 
00:19:34.839 "traddr": "10.0.0.2", 00:19:34.839 "trsvcid": "4420" 00:19:34.839 }, 00:19:34.839 "peer_address": { 00:19:34.839 "trtype": "TCP", 00:19:34.839 "adrfam": "IPv4", 00:19:34.839 "traddr": "10.0.0.1", 00:19:34.839 "trsvcid": "38012" 00:19:34.839 }, 00:19:34.839 "auth": { 00:19:34.839 "state": "completed", 00:19:34.839 "digest": "sha256", 00:19:34.839 "dhgroup": "ffdhe3072" 00:19:34.839 } 00:19:34.839 } 00:19:34.839 ]' 00:19:34.839 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.839 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.839 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.839 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:34.839 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.839 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.839 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.839 17:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.097 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:19:35.097 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:19:35.664 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.664 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:35.664 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.664 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.664 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.664 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.664 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:35.664 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:35.922 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:35.922 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.922 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:35.922 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:35.922 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:35.922 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.922 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.922 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.922 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.923 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.923 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.923 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.923 17:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.181 00:19:36.181 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.181 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.181 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.441 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.441 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.441 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.441 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.441 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.441 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.441 { 
00:19:36.441 "cntlid": 19, 00:19:36.441 "qid": 0, 00:19:36.441 "state": "enabled", 00:19:36.441 "thread": "nvmf_tgt_poll_group_000", 00:19:36.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:36.441 "listen_address": { 00:19:36.441 "trtype": "TCP", 00:19:36.441 "adrfam": "IPv4", 00:19:36.441 "traddr": "10.0.0.2", 00:19:36.441 "trsvcid": "4420" 00:19:36.441 }, 00:19:36.441 "peer_address": { 00:19:36.441 "trtype": "TCP", 00:19:36.441 "adrfam": "IPv4", 00:19:36.441 "traddr": "10.0.0.1", 00:19:36.441 "trsvcid": "58406" 00:19:36.441 }, 00:19:36.441 "auth": { 00:19:36.441 "state": "completed", 00:19:36.441 "digest": "sha256", 00:19:36.441 "dhgroup": "ffdhe3072" 00:19:36.441 } 00:19:36.441 } 00:19:36.441 ]' 00:19:36.441 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.441 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.441 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.441 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:36.441 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.441 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.441 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.441 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.700 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:19:36.700 17:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:19:37.268 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.268 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:37.268 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.268 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.268 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.268 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.268 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:37.269 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:37.527 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:37.527 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.528 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.528 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:37.528 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:37.528 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.528 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.528 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.528 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.528 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.528 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.528 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.528 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.787 00:19:37.787 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.787 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.787 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.787 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.787 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.787 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.787 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.787 17:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.787 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.787 { 00:19:37.787 "cntlid": 21, 00:19:37.787 "qid": 0, 00:19:37.787 "state": "enabled", 00:19:37.787 "thread": "nvmf_tgt_poll_group_000", 00:19:37.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:37.787 "listen_address": { 00:19:37.787 "trtype": "TCP", 00:19:37.787 "adrfam": "IPv4", 00:19:37.787 "traddr": "10.0.0.2", 00:19:37.787 "trsvcid": "4420" 00:19:37.787 }, 00:19:37.787 "peer_address": { 00:19:37.787 "trtype": "TCP", 00:19:37.787 "adrfam": "IPv4", 00:19:37.787 "traddr": "10.0.0.1", 00:19:37.787 "trsvcid": "58432" 00:19:37.787 }, 00:19:37.787 "auth": { 00:19:37.787 "state": "completed", 00:19:37.787 "digest": "sha256", 00:19:37.787 "dhgroup": "ffdhe3072" 00:19:37.787 } 00:19:37.787 } 00:19:37.787 ]' 00:19:37.787 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.047 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.047 17:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.047 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:38.047 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.047 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.047 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.047 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.306 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:19:38.306 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:38.874 17:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:39.133 00:19:39.133 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.133 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.133 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.394 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.394 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.394 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.394 17:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.394 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.394 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.394 { 00:19:39.394 "cntlid": 23, 00:19:39.394 "qid": 0, 00:19:39.394 "state": "enabled", 00:19:39.394 "thread": "nvmf_tgt_poll_group_000", 00:19:39.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:39.394 "listen_address": { 00:19:39.394 "trtype": "TCP", 00:19:39.394 "adrfam": "IPv4", 00:19:39.394 "traddr": "10.0.0.2", 00:19:39.394 "trsvcid": "4420" 00:19:39.394 }, 00:19:39.394 "peer_address": { 00:19:39.394 "trtype": "TCP", 00:19:39.394 "adrfam": "IPv4", 00:19:39.394 "traddr": "10.0.0.1", 00:19:39.394 "trsvcid": "58454" 00:19:39.394 }, 00:19:39.394 "auth": { 00:19:39.394 "state": "completed", 00:19:39.394 "digest": "sha256", 00:19:39.395 "dhgroup": "ffdhe3072" 00:19:39.395 } 00:19:39.395 } 00:19:39.395 ]' 00:19:39.395 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.395 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.395 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.667 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:39.667 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.667 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.668 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.668 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.668 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:19:39.668 17:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:19:40.249 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.249 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:40.249 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.249 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.249 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:40.249 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.249 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.249 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:40.249 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:40.508 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:40.508 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.508 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:40.508 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:40.508 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:40.508 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.508 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.508 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.508 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.508 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.508 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.508 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.508 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.766 00:19:40.766 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.766 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.766 17:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.025 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.025 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.025 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.025 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.025 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.025 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.025 { 00:19:41.025 "cntlid": 25, 00:19:41.025 "qid": 0, 00:19:41.025 "state": "enabled", 00:19:41.025 "thread": "nvmf_tgt_poll_group_000", 00:19:41.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:41.025 "listen_address": { 00:19:41.025 "trtype": "TCP", 00:19:41.025 "adrfam": "IPv4", 00:19:41.025 "traddr": "10.0.0.2", 00:19:41.025 "trsvcid": "4420" 00:19:41.025 }, 00:19:41.025 "peer_address": { 00:19:41.025 "trtype": "TCP", 00:19:41.025 "adrfam": "IPv4", 00:19:41.025 "traddr": "10.0.0.1", 00:19:41.025 "trsvcid": "58478" 00:19:41.025 }, 00:19:41.025 "auth": { 00:19:41.025 "state": "completed", 00:19:41.025 "digest": "sha256", 00:19:41.025 "dhgroup": "ffdhe4096" 00:19:41.025 } 00:19:41.025 } 00:19:41.025 ]' 00:19:41.025 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.025 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.025 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.025 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:41.025 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.284 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.285 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.285 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.285 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:19:41.285 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:19:41.853 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.853 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:41.853 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.853 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.853 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.853 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.853 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:41.853 17:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:42.111 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:42.111 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.111 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:42.111 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:42.111 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:42.111 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.111 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.111 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.111 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.111 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.111 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.111 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.111 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.368 00:19:42.368 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.368 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.368 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.626 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.626 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.626 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.626 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.626 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.626 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.626 { 00:19:42.626 "cntlid": 27, 00:19:42.626 "qid": 0, 00:19:42.626 "state": "enabled", 00:19:42.626 "thread": "nvmf_tgt_poll_group_000", 00:19:42.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:42.626 "listen_address": { 00:19:42.626 "trtype": "TCP", 00:19:42.626 "adrfam": "IPv4", 00:19:42.626 "traddr": "10.0.0.2", 00:19:42.626 "trsvcid": "4420" 00:19:42.626 }, 00:19:42.626 "peer_address": { 00:19:42.626 "trtype": "TCP", 00:19:42.626 "adrfam": "IPv4", 00:19:42.626 "traddr": "10.0.0.1", 00:19:42.627 "trsvcid": "58504" 00:19:42.627 }, 00:19:42.627 "auth": { 00:19:42.627 "state": "completed", 00:19:42.627 "digest": "sha256", 00:19:42.627 "dhgroup": "ffdhe4096" 00:19:42.627 } 00:19:42.627 } 00:19:42.627 ]' 00:19:42.627 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.627 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.627 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.627 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:42.627 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.886 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.886 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.886 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.886 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:19:42.886 17:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:19:43.453 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:43.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.453 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:43.453 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.453 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.453 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.453 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.453 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:43.453 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:43.712 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:43.712 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.712 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.712 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:43.712 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:43.712 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.712 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.712 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.712 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.712 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.712 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.712 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.712 17:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.971 00:19:43.971 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
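(For reference: the verification that follows each authenticated attach above is driven by two RPCs. The host-side SPDK app, reached through the /var/tmp/host.sock socket used by every hostrpc call in this run, lists its bdev controllers; the target app reports the subsystem's queue pairs, whose auth block is then field-checked with jq. A minimal equivalent of that check — a sketch, assuming the rpc.py path from this workspace and the target listening on its default RPC socket, with the single combined jq -e test standing in for the script's separate [[ == ]] comparisons:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Host side: the attach above should have created exactly one controller, nvme0.
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
# Target side: dump the subsystem's qpairs and confirm the negotiated
# digest/dhgroup and a completed DH-HMAC-CHAP state; jq -e sets the exit code.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
jq -e '.[0].auth | .digest == "sha256" and .dhgroup == "ffdhe4096" and .state == "completed"' <<< "$qpairs"
)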
00:19:43.971 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.971 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.229 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.229 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.229 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.229 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.229 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.229 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.229 { 00:19:44.229 "cntlid": 29, 00:19:44.229 "qid": 0, 00:19:44.229 "state": "enabled", 00:19:44.229 "thread": "nvmf_tgt_poll_group_000", 00:19:44.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:44.229 "listen_address": { 00:19:44.229 "trtype": "TCP", 00:19:44.229 "adrfam": "IPv4", 00:19:44.229 "traddr": "10.0.0.2", 00:19:44.229 "trsvcid": "4420" 00:19:44.229 }, 00:19:44.229 "peer_address": { 00:19:44.229 "trtype": "TCP", 00:19:44.229 "adrfam": "IPv4", 00:19:44.229 "traddr": "10.0.0.1", 00:19:44.229 "trsvcid": "58540" 00:19:44.229 }, 00:19:44.229 "auth": { 00:19:44.229 "state": "completed", 00:19:44.229 "digest": "sha256", 00:19:44.229 "dhgroup": "ffdhe4096" 00:19:44.229 } 00:19:44.229 } 00:19:44.229 ]' 00:19:44.229 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.229 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.229 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.229 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:44.229 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.487 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.487 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.487 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.487 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:19:44.487 17:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: 
--dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:19:45.054 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.054 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:45.054 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.054 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.054 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.054 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.054 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:45.054 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:45.312 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:45.312 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.312 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:45.312 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:45.312 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:45.312 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.312 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:45.313 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.313 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.313 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.313 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:45.313 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:45.313 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:45.571 00:19:45.571 17:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.571 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.571 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.831 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.831 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.831 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.831 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.831 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.831 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.831 { 00:19:45.831 "cntlid": 31, 00:19:45.831 "qid": 0, 00:19:45.831 "state": "enabled", 00:19:45.831 "thread": "nvmf_tgt_poll_group_000", 00:19:45.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:45.831 "listen_address": { 00:19:45.831 "trtype": "TCP", 00:19:45.831 "adrfam": "IPv4", 00:19:45.831 "traddr": "10.0.0.2", 00:19:45.831 "trsvcid": "4420" 00:19:45.831 }, 00:19:45.831 "peer_address": { 00:19:45.831 "trtype": "TCP", 00:19:45.831 "adrfam": "IPv4", 00:19:45.831 "traddr": "10.0.0.1", 00:19:45.831 "trsvcid": "58672" 00:19:45.831 }, 00:19:45.831 "auth": { 00:19:45.831 "state": "completed", 00:19:45.831 "digest": "sha256", 00:19:45.831 "dhgroup": "ffdhe4096" 00:19:45.831 } 00:19:45.831 } 00:19:45.831 ]' 00:19:45.831 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.831 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.831 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.831 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:45.831 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.090 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.090 17:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.090 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.090 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:19:46.090 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:19:46.657 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.657 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:46.657 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.657 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.657 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.657 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.657 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.657 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:46.657 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:46.916 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:46.916 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.916 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.916 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:46.916 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:46.916 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.916 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.916 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.916 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.916 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.916 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.916 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.916 17:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.175 00:19:47.436 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.436 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.436 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.436 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.436 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.436 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.436 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.436 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.436 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.436 { 00:19:47.436 "cntlid": 33, 00:19:47.436 "qid": 0, 00:19:47.436 "state": "enabled", 00:19:47.436 "thread": "nvmf_tgt_poll_group_000", 00:19:47.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:47.436 "listen_address": { 00:19:47.436 "trtype": "TCP", 00:19:47.436 "adrfam": "IPv4", 00:19:47.436 "traddr": "10.0.0.2", 00:19:47.436 "trsvcid": "4420" 00:19:47.436 }, 00:19:47.436 "peer_address": { 00:19:47.436 "trtype": "TCP", 00:19:47.436 "adrfam": "IPv4", 00:19:47.436 "traddr": "10.0.0.1", 00:19:47.436 "trsvcid": "58694" 00:19:47.436 }, 00:19:47.436 "auth": { 00:19:47.436 "state": "completed", 00:19:47.436 "digest": "sha256", 00:19:47.436 "dhgroup": "ffdhe6144" 00:19:47.436 } 00:19:47.436 } 00:19:47.436 ]' 00:19:47.436 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.436 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.436 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.698 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:47.698 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.698 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.698 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.698 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.957 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret 
DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:19:47.957 17:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.524 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.091 00:19:49.091 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.091 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.091 17:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.091 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.091 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.091 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.091 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.091 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.091 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.091 { 00:19:49.091 "cntlid": 35, 00:19:49.091 "qid": 0, 00:19:49.091 "state": "enabled", 00:19:49.091 "thread": "nvmf_tgt_poll_group_000", 00:19:49.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:49.091 "listen_address": { 00:19:49.091 "trtype": "TCP", 00:19:49.091 "adrfam": "IPv4", 00:19:49.092 "traddr": "10.0.0.2", 00:19:49.092 "trsvcid": "4420" 00:19:49.092 }, 00:19:49.092 "peer_address": { 00:19:49.092 "trtype": "TCP", 00:19:49.092 "adrfam": "IPv4", 00:19:49.092 "traddr": "10.0.0.1", 00:19:49.092 "trsvcid": "58722" 00:19:49.092 }, 00:19:49.092 "auth": { 00:19:49.092 "state": "completed", 00:19:49.092 "digest": "sha256", 00:19:49.092 "dhgroup": "ffdhe6144" 00:19:49.092 } 00:19:49.092 } 00:19:49.092 ]' 00:19:49.092 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.092 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.092 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.351 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:49.351 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.351 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.351 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.351 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.351 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:19:49.351 17:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:19:49.921 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.921 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:49.921 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.921 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.921 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.921 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.921 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.921 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:50.180 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:50.180 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.180 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.180 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:50.180 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:50.180 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.180 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.180 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.180 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.180 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.180 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.180 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.180 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.439 00:19:50.439 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.439 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.698 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.698 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.698 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.698 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.698 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.698 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.698 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.698 { 00:19:50.698 "cntlid": 37, 00:19:50.698 "qid": 0, 00:19:50.698 "state": "enabled", 00:19:50.698 "thread": "nvmf_tgt_poll_group_000", 00:19:50.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:50.698 "listen_address": { 00:19:50.698 "trtype": "TCP", 00:19:50.698 "adrfam": "IPv4", 00:19:50.698 "traddr": "10.0.0.2", 00:19:50.698 "trsvcid": "4420" 00:19:50.698 }, 00:19:50.698 "peer_address": { 00:19:50.698 "trtype": "TCP", 00:19:50.698 "adrfam": "IPv4", 00:19:50.698 "traddr": "10.0.0.1", 00:19:50.698 "trsvcid": "58758" 00:19:50.698 }, 00:19:50.698 "auth": { 00:19:50.698 "state": "completed", 00:19:50.698 "digest": "sha256", 00:19:50.698 "dhgroup": "ffdhe6144" 00:19:50.698 } 00:19:50.698 } 00:19:50.698 ]' 00:19:50.698 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.957 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.957 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.957 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:50.957 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.957 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.957 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:50.957 17:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.216 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:19:51.216 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.784 17:36:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.784 17:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:52.352 00:19:52.352 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.352 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.352 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.352 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.352 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.352 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.352 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.352 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.352 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.352 { 00:19:52.352 "cntlid": 39, 00:19:52.352 "qid": 0, 00:19:52.352 "state": "enabled", 00:19:52.352 "thread": "nvmf_tgt_poll_group_000", 00:19:52.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:52.352 "listen_address": { 00:19:52.352 "trtype": "TCP", 00:19:52.352 "adrfam": "IPv4", 00:19:52.352 "traddr": "10.0.0.2", 00:19:52.352 "trsvcid": "4420" 00:19:52.352 }, 00:19:52.352 "peer_address": { 00:19:52.352 "trtype": "TCP", 00:19:52.352 "adrfam": "IPv4", 00:19:52.352 "traddr": "10.0.0.1", 00:19:52.352 "trsvcid": "58784" 00:19:52.352 }, 00:19:52.352 "auth": { 00:19:52.352 "state": "completed", 00:19:52.352 "digest": "sha256", 00:19:52.352 "dhgroup": "ffdhe6144" 00:19:52.352 } 00:19:52.352 } 00:19:52.352 ]' 00:19:52.611 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.611 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.611 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.611 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:52.611 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.611 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:52.611 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.611 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.870 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:19:52.870 17:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:19:53.438 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.438 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:53.438 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.438 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.438 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.438 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.438 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.438 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:53.438 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:53.438 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:53.438 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.438 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:53.438 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:53.696 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:53.696 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.696 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.696 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
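(Each iteration logged above follows the same cycle. Condensed into bare RPC/CLI calls it looks like the sketch below — a reconstruction of what the script drives, not its verbatim text — with the paths, NQNs and addresses taken from this run and $secret/$ctrl_secret standing in for the per-key DHHC-1 values printed in the nvme_connect lines:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
subnqn=nqn.2024-03.io.spdk:cnode0
# 1. Pin the host app to a single digest/dhgroup combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
# 2. Allow the host on the target, bound to a DH-HMAC-CHAP key (plus a
#    controller key when one exists -- key3 carries no ckey3 in this run).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$host" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 3. Attach from the host app, which performs the authentication handshake.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$host" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 4. Verify the qpair's auth block (see the jq checks above), then detach.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# 5. Repeat the handshake with the kernel initiator, then clean up the host entry.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$host" \
    --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 \
    --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$host"
)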
00:19:53.696 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.696 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.696 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.696 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.696 17:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.955 00:19:53.955 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.955 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.955 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.214 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.214 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.214 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.214 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.214 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.214 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.214 { 00:19:54.214 "cntlid": 41, 00:19:54.214 "qid": 0, 00:19:54.214 "state": "enabled", 00:19:54.214 "thread": "nvmf_tgt_poll_group_000", 00:19:54.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:54.214 "listen_address": { 00:19:54.214 "trtype": "TCP", 00:19:54.214 "adrfam": "IPv4", 00:19:54.214 "traddr": "10.0.0.2", 00:19:54.214 "trsvcid": "4420" 00:19:54.214 }, 00:19:54.214 "peer_address": { 00:19:54.214 "trtype": "TCP", 00:19:54.214 "adrfam": "IPv4", 00:19:54.214 "traddr": "10.0.0.1", 00:19:54.214 "trsvcid": "58808" 00:19:54.214 }, 00:19:54.214 "auth": { 00:19:54.214 "state": "completed", 00:19:54.214 "digest": "sha256", 00:19:54.214 "dhgroup": "ffdhe8192" 00:19:54.214 } 00:19:54.214 } 00:19:54.214 ]' 00:19:54.214 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.214 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.214 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.473 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:54.473 17:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.473 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.473 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.473 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.473 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:19:54.474 17:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:19:55.041 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.041 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:55.041 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.041 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.300 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.300 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.300 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:55.300 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:55.300 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:55.300 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.300 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.300 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:55.300 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:55.300 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.300 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.300 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.300 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.300 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.300 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.300 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.300 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.868 00:19:55.868 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.868 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.868 17:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.127 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.127 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.127 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.127 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.127 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.127 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.127 { 00:19:56.127 "cntlid": 43, 00:19:56.127 "qid": 0, 00:19:56.127 "state": "enabled", 00:19:56.127 "thread": "nvmf_tgt_poll_group_000", 00:19:56.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:56.127 "listen_address": { 00:19:56.127 "trtype": "TCP", 00:19:56.127 "adrfam": "IPv4", 00:19:56.127 "traddr": "10.0.0.2", 00:19:56.127 "trsvcid": "4420" 00:19:56.127 }, 00:19:56.127 "peer_address": { 00:19:56.127 "trtype": "TCP", 00:19:56.127 "adrfam": "IPv4", 00:19:56.127 "traddr": "10.0.0.1", 00:19:56.127 "trsvcid": "42666" 00:19:56.127 }, 00:19:56.127 "auth": { 00:19:56.127 "state": "completed", 00:19:56.127 "digest": "sha256", 00:19:56.127 "dhgroup": "ffdhe8192" 00:19:56.127 } 00:19:56.127 } 00:19:56.127 ]' 00:19:56.127 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.127 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:56.127 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.127 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:56.127 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.127 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.127 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.127 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.387 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:19:56.387 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:19:56.952 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.952 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:56.952 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.952 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.952 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.952 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.952 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:56.952 17:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:57.211 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:57.211 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.211 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.211 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:57.211 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:57.211 17:36:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.211 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.211 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.211 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.211 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.211 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.211 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.211 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.779 00:19:57.779 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.779 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.779 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.779 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.779 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.779 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.779 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.779 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.779 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.779 { 00:19:57.779 "cntlid": 45, 00:19:57.779 "qid": 0, 00:19:57.779 "state": "enabled", 00:19:57.779 "thread": "nvmf_tgt_poll_group_000", 00:19:57.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:57.779 "listen_address": { 00:19:57.779 "trtype": "TCP", 00:19:57.779 "adrfam": "IPv4", 00:19:57.779 "traddr": "10.0.0.2", 00:19:57.779 "trsvcid": "4420" 00:19:57.779 }, 00:19:57.779 "peer_address": { 00:19:57.779 "trtype": "TCP", 00:19:57.779 "adrfam": "IPv4", 00:19:57.779 "traddr": "10.0.0.1", 00:19:57.779 "trsvcid": "42696" 00:19:57.779 }, 00:19:57.779 "auth": { 00:19:57.779 "state": "completed", 00:19:57.779 "digest": "sha256", 00:19:57.779 "dhgroup": "ffdhe8192" 00:19:57.779 } 00:19:57.779 } 00:19:57.779 ]' 00:19:57.779 
17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.037 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.037 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.037 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:58.037 17:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.037 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.037 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.037 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.296 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:19:58.296 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.864 17:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.864 17:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.431 00:19:59.431 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.431 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.431 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.690 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.690 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.690 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.690 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.690 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.690 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.690 { 00:19:59.690 "cntlid": 47, 00:19:59.690 "qid": 0, 00:19:59.690 "state": "enabled", 00:19:59.690 "thread": "nvmf_tgt_poll_group_000", 00:19:59.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:59.690 "listen_address": { 00:19:59.690 "trtype": "TCP", 00:19:59.690 "adrfam": "IPv4", 00:19:59.690 "traddr": "10.0.0.2", 00:19:59.690 "trsvcid": "4420" 00:19:59.690 }, 00:19:59.690 "peer_address": { 00:19:59.690 "trtype": "TCP", 00:19:59.690 "adrfam": "IPv4", 00:19:59.690 "traddr": "10.0.0.1", 00:19:59.690 "trsvcid": "42726" 00:19:59.690 }, 00:19:59.690 "auth": { 00:19:59.690 "state": "completed", 00:19:59.690 
"digest": "sha256", 00:19:59.690 "dhgroup": "ffdhe8192" 00:19:59.690 } 00:19:59.690 } 00:19:59.690 ]' 00:19:59.690 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.690 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.690 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.690 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:59.690 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.690 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.690 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.690 17:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.950 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:19:59.950 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:20:00.519 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.519 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:00.519 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.519 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.519 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.519 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:00.519 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.519 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.519 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:00.519 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:00.779 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:00.779 17:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.779 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:00.779 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:00.779 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:00.779 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.779 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.779 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.779 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.779 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.779 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.779 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.779 17:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.037 00:20:01.037 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.038 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.038 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.296 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.296 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.296 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.296 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.296 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.296 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.296 { 00:20:01.296 "cntlid": 49, 00:20:01.296 "qid": 0, 00:20:01.296 "state": "enabled", 00:20:01.296 "thread": "nvmf_tgt_poll_group_000", 00:20:01.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:01.296 "listen_address": { 00:20:01.296 "trtype": "TCP", 00:20:01.296 "adrfam": "IPv4", 
00:20:01.296 "traddr": "10.0.0.2", 00:20:01.296 "trsvcid": "4420" 00:20:01.296 }, 00:20:01.296 "peer_address": { 00:20:01.296 "trtype": "TCP", 00:20:01.296 "adrfam": "IPv4", 00:20:01.296 "traddr": "10.0.0.1", 00:20:01.296 "trsvcid": "42758" 00:20:01.296 }, 00:20:01.296 "auth": { 00:20:01.296 "state": "completed", 00:20:01.296 "digest": "sha384", 00:20:01.296 "dhgroup": "null" 00:20:01.296 } 00:20:01.296 } 00:20:01.296 ]' 00:20:01.296 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.296 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.296 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.296 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:01.296 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.296 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.296 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.296 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.555 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:20:01.555 17:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:20:02.122 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.122 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:02.123 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.123 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.123 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.123 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.123 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:02.123 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:02.382 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:02.382 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.382 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:02.382 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:02.382 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:02.382 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.382 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.382 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.382 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.382 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.382 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.382 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.382 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.640 00:20:02.640 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.640 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.640 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.919 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.919 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.919 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.919 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.919 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.919 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.919 { 00:20:02.919 "cntlid": 51, 00:20:02.919 "qid": 0, 00:20:02.919 "state": "enabled", 
00:20:02.919 "thread": "nvmf_tgt_poll_group_000", 00:20:02.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:02.919 "listen_address": { 00:20:02.919 "trtype": "TCP", 00:20:02.919 "adrfam": "IPv4", 00:20:02.919 "traddr": "10.0.0.2", 00:20:02.919 "trsvcid": "4420" 00:20:02.919 }, 00:20:02.919 "peer_address": { 00:20:02.919 "trtype": "TCP", 00:20:02.919 "adrfam": "IPv4", 00:20:02.919 "traddr": "10.0.0.1", 00:20:02.919 "trsvcid": "42786" 00:20:02.919 }, 00:20:02.919 "auth": { 00:20:02.919 "state": "completed", 00:20:02.919 "digest": "sha384", 00:20:02.919 "dhgroup": "null" 00:20:02.919 } 00:20:02.919 } 00:20:02.919 ]' 00:20:02.919 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.919 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.919 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.919 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:02.919 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.919 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.919 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.919 17:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.193 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:20:03.193 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.795 17:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.054 00:20:04.054 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.054 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.054 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.312 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.312 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.312 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.312 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.312 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.312 17:37:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.312 { 00:20:04.312 "cntlid": 53, 00:20:04.312 "qid": 0, 00:20:04.313 "state": "enabled", 00:20:04.313 "thread": "nvmf_tgt_poll_group_000", 00:20:04.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:04.313 "listen_address": { 00:20:04.313 "trtype": "TCP", 00:20:04.313 "adrfam": "IPv4", 00:20:04.313 "traddr": "10.0.0.2", 00:20:04.313 "trsvcid": "4420" 00:20:04.313 }, 00:20:04.313 "peer_address": { 00:20:04.313 "trtype": "TCP", 00:20:04.313 "adrfam": "IPv4", 00:20:04.313 "traddr": "10.0.0.1", 00:20:04.313 "trsvcid": "42810" 00:20:04.313 }, 00:20:04.313 "auth": { 00:20:04.313 "state": "completed", 00:20:04.313 "digest": "sha384", 00:20:04.313 "dhgroup": "null" 00:20:04.313 } 00:20:04.313 } 00:20:04.313 ]' 00:20:04.313 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.313 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.313 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.313 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:04.313 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.572 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.572 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.572 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.572 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:20:04.572 17:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:20:05.141 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.141 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:05.141 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.141 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.141 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.141 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:05.141 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:05.141 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:05.400 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:05.400 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.400 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:05.400 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:05.400 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:05.400 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.400 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:05.400 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.400 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.400 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.400 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:05.400 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.400 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.659 00:20:05.660 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.660 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.660 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.919 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.919 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.919 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.919 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.919 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.919 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.919 { 00:20:05.919 "cntlid": 55, 00:20:05.919 "qid": 0, 00:20:05.919 "state": "enabled", 00:20:05.919 "thread": "nvmf_tgt_poll_group_000", 00:20:05.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:05.919 "listen_address": { 00:20:05.919 "trtype": "TCP", 00:20:05.919 "adrfam": "IPv4", 00:20:05.919 "traddr": "10.0.0.2", 00:20:05.919 "trsvcid": "4420" 00:20:05.919 }, 00:20:05.919 "peer_address": { 00:20:05.919 "trtype": "TCP", 00:20:05.919 "adrfam": "IPv4", 00:20:05.919 "traddr": "10.0.0.1", 00:20:05.919 "trsvcid": "35642" 00:20:05.919 }, 00:20:05.919 "auth": { 00:20:05.919 "state": "completed", 00:20:05.919 "digest": "sha384", 00:20:05.919 "dhgroup": "null" 00:20:05.920 } 00:20:05.920 } 00:20:05.920 ]' 00:20:05.920 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.920 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.920 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.920 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:05.920 17:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.920 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.920 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.920 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.179 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:20:06.179 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:20:06.748 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.748 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:06.748 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.748 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.748 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.748 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.748 17:37:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.748 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:06.748 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.008 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:07.008 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.008 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:07.008 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:07.008 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:07.008 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.008 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.008 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.008 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.008 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.008 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.008 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.008 17:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.268 00:20:07.268 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.268 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.268 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.528 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.528 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.528 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:07.528 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.528 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.528 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.528 { 00:20:07.528 "cntlid": 57, 00:20:07.528 "qid": 0, 00:20:07.528 "state": "enabled", 00:20:07.528 "thread": "nvmf_tgt_poll_group_000", 00:20:07.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:07.528 "listen_address": { 00:20:07.528 "trtype": "TCP", 00:20:07.528 "adrfam": "IPv4", 00:20:07.528 "traddr": "10.0.0.2", 00:20:07.528 "trsvcid": "4420" 00:20:07.528 }, 00:20:07.528 "peer_address": { 00:20:07.528 "trtype": "TCP", 00:20:07.528 "adrfam": "IPv4", 00:20:07.528 "traddr": "10.0.0.1", 00:20:07.528 "trsvcid": "35664" 00:20:07.528 }, 00:20:07.528 "auth": { 00:20:07.528 "state": "completed", 00:20:07.528 "digest": "sha384", 00:20:07.528 "dhgroup": "ffdhe2048" 00:20:07.528 } 00:20:07.528 } 00:20:07.528 ]' 00:20:07.528 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.528 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.528 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.528 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:07.528 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.528 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.528 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.528 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.787 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:20:07.787 17:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:20:08.356 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.356 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:08.356 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.356 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.356 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.356 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.356 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:08.356 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:08.616 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:08.616 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.616 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:08.616 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:08.616 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:08.616 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.616 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.616 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.616 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.616 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.616 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.616 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.616 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.876 00:20:08.876 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.876 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.876 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.876 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.876 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.876 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.876 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.876 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.876 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.876 { 00:20:08.876 "cntlid": 59, 00:20:08.876 "qid": 0, 00:20:08.876 "state": "enabled", 00:20:08.876 "thread": "nvmf_tgt_poll_group_000", 00:20:08.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:08.876 "listen_address": { 00:20:08.876 "trtype": "TCP", 00:20:08.876 "adrfam": "IPv4", 00:20:08.876 "traddr": "10.0.0.2", 00:20:08.876 "trsvcid": "4420" 00:20:08.876 }, 00:20:08.876 "peer_address": { 00:20:08.876 "trtype": "TCP", 00:20:08.876 "adrfam": "IPv4", 00:20:08.876 "traddr": "10.0.0.1", 00:20:08.876 "trsvcid": "35694" 00:20:08.876 }, 00:20:08.876 "auth": { 00:20:08.876 "state": "completed", 00:20:08.876 "digest": "sha384", 00:20:08.876 "dhgroup": "ffdhe2048" 00:20:08.876 } 00:20:08.876 } 00:20:08.876 ]' 00:20:08.876 17:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.134 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.134 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.134 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:09.134 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.134 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.134 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.134 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.394 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:20:09.394 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:20:09.962 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.962 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:09.962 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.962 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.962 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.962 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.962 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:09.962 17:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:09.962 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:09.962 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.962 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:09.962 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:09.962 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:09.962 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.962 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.962 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.962 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.221 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.221 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.221 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.221 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.221 00:20:10.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:10.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.481 { 00:20:10.481 "cntlid": 61, 00:20:10.481 "qid": 0, 00:20:10.481 "state": "enabled", 00:20:10.481 "thread": "nvmf_tgt_poll_group_000", 00:20:10.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:10.481 "listen_address": { 00:20:10.481 "trtype": "TCP", 00:20:10.481 "adrfam": "IPv4", 00:20:10.481 "traddr": "10.0.0.2", 00:20:10.481 "trsvcid": "4420" 00:20:10.481 }, 00:20:10.481 "peer_address": { 00:20:10.481 "trtype": "TCP", 00:20:10.481 "adrfam": "IPv4", 00:20:10.481 "traddr": "10.0.0.1", 00:20:10.481 "trsvcid": "35718" 00:20:10.481 }, 00:20:10.481 "auth": { 00:20:10.481 "state": "completed", 00:20:10.481 "digest": "sha384", 00:20:10.481 "dhgroup": "ffdhe2048" 00:20:10.481 } 00:20:10.481 } 00:20:10.481 ]' 00:20:10.481 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.740 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.740 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.740 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:10.740 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.740 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.740 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.740 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.999 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:20:10.999 17:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.569 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.828 00:20:11.828 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.828 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.828 17:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.087 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.087 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.087 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.087 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.087 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.087 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.087 { 00:20:12.087 "cntlid": 63, 00:20:12.087 "qid": 0, 00:20:12.087 "state": "enabled", 00:20:12.087 "thread": "nvmf_tgt_poll_group_000", 00:20:12.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:12.087 "listen_address": { 00:20:12.087 "trtype": "TCP", 00:20:12.087 "adrfam": "IPv4", 00:20:12.087 "traddr": "10.0.0.2", 00:20:12.087 "trsvcid": "4420" 00:20:12.087 }, 00:20:12.087 "peer_address": { 00:20:12.087 "trtype": "TCP", 00:20:12.087 "adrfam": "IPv4", 00:20:12.087 "traddr": "10.0.0.1", 00:20:12.087 "trsvcid": "35750" 00:20:12.087 }, 00:20:12.087 "auth": { 00:20:12.087 "state": "completed", 00:20:12.087 "digest": "sha384", 00:20:12.087 "dhgroup": "ffdhe2048" 00:20:12.087 } 00:20:12.087 } 00:20:12.087 ]' 00:20:12.087 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.087 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.087 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.346 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:12.346 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.346 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.346 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.346 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.346 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:20:12.346 17:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:20:12.913 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:12.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.913 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:12.913 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.913 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.913 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.913 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:12.913 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.913 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:12.913 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:13.172 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:13.172 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.172 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:13.172 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:13.172 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:13.172 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.172 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.172 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.172 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.172 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.172 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.172 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.172 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.431 
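
Each connect_authenticate pass in this trace repeats the same RPC sequence, split between two sockets: hostrpc drives the initiator-side SPDK app at /var/tmp/host.sock, while rpc_cmd drives the target. A minimal sketch of one pass, assuming the rpc.py path, NQNs, and key names (key0/ckey0, registered earlier in auth.sh, not shown in this excerpt) exactly as they appear in the log:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    # attach a controller that authenticates with DH-HMAC-CHAP key0
    # (ckey0 makes the authentication bidirectional)
    $RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # verify the controller came up and the qpair finished authenticating
    $RPC bdev_nvme_get_controllers | jq -r '.[].name'            # expect: nvme0
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | .state, .digest, .dhgroup'          # expect: completed, sha384, ffdhe3072
    $RPC bdev_nvme_detach_controller nvme0

The second half of each pass then exercises the same key pair through nvme-cli (nvme connect --dhchap-secret/--dhchap-ctrl-secret followed by nvme disconnect), so both the SPDK initiator and the kernel host path are checked against the same target configuration.
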
00:20:13.431 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.431 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.431 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.691 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.691 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.691 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.691 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.691 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.691 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.691 { 00:20:13.691 "cntlid": 65, 00:20:13.691 "qid": 0, 00:20:13.691 "state": "enabled", 00:20:13.691 "thread": "nvmf_tgt_poll_group_000", 00:20:13.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:13.691 "listen_address": { 00:20:13.692 "trtype": "TCP", 00:20:13.692 "adrfam": "IPv4", 00:20:13.692 "traddr": "10.0.0.2", 00:20:13.692 "trsvcid": "4420" 00:20:13.692 }, 00:20:13.692 "peer_address": { 00:20:13.692 "trtype": "TCP", 00:20:13.692 "adrfam": "IPv4", 00:20:13.692 "traddr": "10.0.0.1", 00:20:13.692 "trsvcid": "35784" 00:20:13.692 }, 00:20:13.692 "auth": { 00:20:13.692 "state": "completed", 00:20:13.692 "digest": "sha384", 00:20:13.692 "dhgroup": "ffdhe3072" 00:20:13.692 } 00:20:13.692 } 00:20:13.692 ]' 00:20:13.692 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.692 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.692 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.692 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:13.692 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.950 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.950 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.950 17:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.950 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:20:13.950 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:20:14.518 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.518 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:14.518 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.518 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.518 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.518 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.518 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.518 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.778 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:14.778 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.778 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.778 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:14.778 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:14.778 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.778 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.778 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.778 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.778 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.778 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.778 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.778 17:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.037 00:20:15.037 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.037 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.037 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.296 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.296 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.296 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.296 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.296 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.296 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.296 { 00:20:15.296 "cntlid": 67, 00:20:15.296 "qid": 0, 00:20:15.296 "state": "enabled", 00:20:15.296 "thread": "nvmf_tgt_poll_group_000", 00:20:15.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:15.296 "listen_address": { 00:20:15.296 "trtype": "TCP", 00:20:15.296 "adrfam": "IPv4", 00:20:15.296 "traddr": "10.0.0.2", 00:20:15.296 "trsvcid": "4420" 00:20:15.296 }, 00:20:15.296 "peer_address": { 00:20:15.296 "trtype": "TCP", 00:20:15.296 "adrfam": "IPv4", 00:20:15.296 "traddr": "10.0.0.1", 00:20:15.296 "trsvcid": "53954" 00:20:15.296 }, 00:20:15.296 "auth": { 00:20:15.296 "state": "completed", 00:20:15.296 "digest": "sha384", 00:20:15.296 "dhgroup": "ffdhe3072" 00:20:15.296 } 00:20:15.296 } 00:20:15.296 ]' 00:20:15.296 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.296 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.296 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.296 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:15.296 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.296 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.296 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.296 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.555 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret 
DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:20:15.555 17:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:20:16.123 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.123 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:16.123 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.123 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.123 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.123 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.123 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:16.123 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:16.383 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:16.383 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.383 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.383 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:16.383 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:16.383 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.383 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.383 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.383 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.383 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.383 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.383 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.383 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.642 00:20:16.642 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.642 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.642 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.901 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.901 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.901 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.901 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.901 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.901 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.901 { 00:20:16.901 "cntlid": 69, 00:20:16.901 "qid": 0, 00:20:16.901 "state": "enabled", 00:20:16.901 "thread": "nvmf_tgt_poll_group_000", 00:20:16.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:16.901 "listen_address": { 00:20:16.901 "trtype": "TCP", 00:20:16.901 "adrfam": "IPv4", 00:20:16.901 "traddr": "10.0.0.2", 00:20:16.901 "trsvcid": "4420" 00:20:16.901 }, 00:20:16.901 "peer_address": { 00:20:16.901 "trtype": "TCP", 00:20:16.901 "adrfam": "IPv4", 00:20:16.901 "traddr": "10.0.0.1", 00:20:16.901 "trsvcid": "53980" 00:20:16.901 }, 00:20:16.901 "auth": { 00:20:16.901 "state": "completed", 00:20:16.901 "digest": "sha384", 00:20:16.901 "dhgroup": "ffdhe3072" 00:20:16.901 } 00:20:16.901 } 00:20:16.901 ]' 00:20:16.901 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.901 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.901 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.901 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:16.901 17:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.901 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.901 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.901 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:17.160 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:20:17.160 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:20:17.728 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.728 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:17.728 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.728 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.729 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.729 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.729 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:17.729 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:17.988 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:17.988 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.988 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:17.988 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:17.988 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:17.988 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.988 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:17.988 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.988 17:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.988 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.988 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
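
The key3 passes in this trace differ from key0–key2: ckeys[3] is empty, so the expansion ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) seen above produces an empty array and no --dhchap-ctrlr-key is passed to nvmf_subsystem_add_host or bdev_nvme_attach_controller. The host still authenticates to the target, but the controller is not authenticated back (unidirectional DH-HMAC-CHAP); the nvme-cli side shows the same thing, with the key3 connect carrying only --dhchap-secret and no --dhchap-ctrl-secret. A sketch of the expansion behavior, with illustrative key material and the function's $3 renamed to keyid:

    ckeys=( [0]="ckey0-material" [1]="ckey1-material" [2]="ckey2-material" [3]="" )
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"   # prints 0: an empty ckeys[3] suppresses the option entirely
    keyid=1
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${ckey[@]}"    # prints: --dhchap-ctrlr-key ckey1
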
00:20:17.988 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.988 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.246 00:20:18.246 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.246 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.246 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.505 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.505 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.505 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.505 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.505 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.505 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.505 { 00:20:18.505 "cntlid": 71, 00:20:18.505 "qid": 0, 00:20:18.505 "state": "enabled", 00:20:18.505 "thread": "nvmf_tgt_poll_group_000", 00:20:18.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:18.505 "listen_address": { 00:20:18.505 "trtype": "TCP", 00:20:18.505 "adrfam": "IPv4", 00:20:18.505 "traddr": "10.0.0.2", 00:20:18.505 "trsvcid": "4420" 00:20:18.505 }, 00:20:18.505 "peer_address": { 00:20:18.505 "trtype": "TCP", 00:20:18.505 "adrfam": "IPv4", 00:20:18.505 "traddr": "10.0.0.1", 00:20:18.505 "trsvcid": "54012" 00:20:18.505 }, 00:20:18.505 "auth": { 00:20:18.505 "state": "completed", 00:20:18.505 "digest": "sha384", 00:20:18.505 "dhgroup": "ffdhe3072" 00:20:18.505 } 00:20:18.505 } 00:20:18.505 ]' 00:20:18.505 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.505 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.505 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.505 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:18.505 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.505 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.505 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.505 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.764 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:20:18.764 17:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:20:19.332 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.332 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:19.332 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.332 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.332 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.332 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.332 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.332 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:19.332 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:19.591 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:19.591 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.591 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.591 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:19.591 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:19.591 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.591 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.591 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.591 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.591 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
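
The auth.sh line numbers in the trace (@119–@123) expose the enclosing iteration: an outer loop over DH groups and an inner loop over key indices, with the host-side bdev_nvme options reconfigured before every attempt. At this point the outer loop has just advanced from ffdhe3072 to ffdhe4096. A rough reconstruction of the loop shape implied by the trace — the exact dhgroup list is an assumption, since only ffdhe2048/3072/4096 and the sha384 digest appear in this excerpt:

    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do                       # auth.sh@119 (list assumed)
        for keyid in "${!keys[@]}"; do                                     # auth.sh@120
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"       # auth.sh@121
            connect_authenticate sha384 "$dhgroup" "$keyid"                # auth.sh@123
        done
    done
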
00:20:19.591 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.591 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.591 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.851 00:20:19.851 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.851 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.851 17:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.110 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.110 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.110 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.110 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.110 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.110 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.110 { 00:20:20.110 "cntlid": 73, 00:20:20.110 "qid": 0, 00:20:20.110 "state": "enabled", 00:20:20.110 "thread": "nvmf_tgt_poll_group_000", 00:20:20.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:20.110 "listen_address": { 00:20:20.110 "trtype": "TCP", 00:20:20.110 "adrfam": "IPv4", 00:20:20.110 "traddr": "10.0.0.2", 00:20:20.110 "trsvcid": "4420" 00:20:20.110 }, 00:20:20.110 "peer_address": { 00:20:20.110 "trtype": "TCP", 00:20:20.110 "adrfam": "IPv4", 00:20:20.110 "traddr": "10.0.0.1", 00:20:20.110 "trsvcid": "54054" 00:20:20.110 }, 00:20:20.110 "auth": { 00:20:20.110 "state": "completed", 00:20:20.110 "digest": "sha384", 00:20:20.110 "dhgroup": "ffdhe4096" 00:20:20.110 } 00:20:20.110 } 00:20:20.110 ]' 00:20:20.110 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.110 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.110 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.110 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:20.110 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.110 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.110 
17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.110 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.369 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:20:20.369 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:20:20.936 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.936 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:20.936 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.936 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.936 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.936 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.936 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:20.936 17:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:21.195 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:21.195 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.195 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:21.195 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:21.195 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:21.195 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.195 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.195 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.195 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.195 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.195 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.195 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.195 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.452 00:20:21.452 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.452 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.452 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.711 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.711 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.711 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.711 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.711 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.711 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.711 { 00:20:21.711 "cntlid": 75, 00:20:21.711 "qid": 0, 00:20:21.711 "state": "enabled", 00:20:21.711 "thread": "nvmf_tgt_poll_group_000", 00:20:21.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:21.711 "listen_address": { 00:20:21.711 "trtype": "TCP", 00:20:21.711 "adrfam": "IPv4", 00:20:21.711 "traddr": "10.0.0.2", 00:20:21.711 "trsvcid": "4420" 00:20:21.711 }, 00:20:21.711 "peer_address": { 00:20:21.711 "trtype": "TCP", 00:20:21.711 "adrfam": "IPv4", 00:20:21.711 "traddr": "10.0.0.1", 00:20:21.711 "trsvcid": "54070" 00:20:21.711 }, 00:20:21.711 "auth": { 00:20:21.711 "state": "completed", 00:20:21.711 "digest": "sha384", 00:20:21.711 "dhgroup": "ffdhe4096" 00:20:21.711 } 00:20:21.711 } 00:20:21.711 ]' 00:20:21.711 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.711 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.711 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.711 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:21.711 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.711 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.711 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.711 17:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.971 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:20:21.971 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:20:22.539 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.539 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:22.539 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.539 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.539 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.539 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.539 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:22.539 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:22.798 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:22.798 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.798 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:22.799 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:22.799 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:22.799 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.799 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.799 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.799 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.799 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.799 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.799 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.799 17:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.057 00:20:23.057 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.057 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.057 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.315 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.315 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.315 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.315 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.316 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.316 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.316 { 00:20:23.316 "cntlid": 77, 00:20:23.316 "qid": 0, 00:20:23.316 "state": "enabled", 00:20:23.316 "thread": "nvmf_tgt_poll_group_000", 00:20:23.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:23.316 "listen_address": { 00:20:23.316 "trtype": "TCP", 00:20:23.316 "adrfam": "IPv4", 00:20:23.316 "traddr": "10.0.0.2", 00:20:23.316 "trsvcid": "4420" 00:20:23.316 }, 00:20:23.316 "peer_address": { 00:20:23.316 "trtype": "TCP", 00:20:23.316 "adrfam": "IPv4", 00:20:23.316 "traddr": "10.0.0.1", 00:20:23.316 "trsvcid": "54102" 00:20:23.316 }, 00:20:23.316 "auth": { 00:20:23.316 "state": "completed", 00:20:23.316 "digest": "sha384", 00:20:23.316 "dhgroup": "ffdhe4096" 00:20:23.316 } 00:20:23.316 } 00:20:23.316 ]' 00:20:23.316 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.316 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.316 17:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.316 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:23.316 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.316 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.316 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.316 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.575 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:20:23.575 17:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:20:24.145 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.146 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:24.146 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.146 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.146 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.146 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.146 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:24.146 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:24.405 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:24.405 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.405 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:24.405 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:24.405 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:24.405 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.405 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:24.405 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.405 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.405 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.405 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:24.405 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.405 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.666 00:20:24.666 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.666 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.666 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.666 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.666 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.666 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.666 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.666 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.666 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.666 { 00:20:24.666 "cntlid": 79, 00:20:24.666 "qid": 0, 00:20:24.666 "state": "enabled", 00:20:24.666 "thread": "nvmf_tgt_poll_group_000", 00:20:24.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:24.666 "listen_address": { 00:20:24.666 "trtype": "TCP", 00:20:24.666 "adrfam": "IPv4", 00:20:24.666 "traddr": "10.0.0.2", 00:20:24.666 "trsvcid": "4420" 00:20:24.666 }, 00:20:24.666 "peer_address": { 00:20:24.666 "trtype": "TCP", 00:20:24.666 "adrfam": "IPv4", 00:20:24.666 "traddr": "10.0.0.1", 00:20:24.666 "trsvcid": "54130" 00:20:24.666 }, 00:20:24.666 "auth": { 00:20:24.666 "state": "completed", 00:20:24.666 "digest": "sha384", 00:20:24.666 "dhgroup": "ffdhe4096" 00:20:24.666 } 00:20:24.666 } 00:20:24.666 ]' 00:20:24.666 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.926 17:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.926 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.926 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:24.926 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.926 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.926 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.927 17:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.186 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:20:25.186 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:20:25.753 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.753 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:25.753 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.753 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.753 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.753 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.753 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.753 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:25.753 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:26.012 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:26.012 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.012 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:26.012 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:26.012 17:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:26.012 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.012 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.012 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.012 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.012 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.012 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.012 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.012 17:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.272 00:20:26.272 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.272 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.272 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.531 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.531 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.531 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.531 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.531 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.531 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.531 { 00:20:26.531 "cntlid": 81, 00:20:26.531 "qid": 0, 00:20:26.531 "state": "enabled", 00:20:26.531 "thread": "nvmf_tgt_poll_group_000", 00:20:26.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:26.531 "listen_address": { 00:20:26.531 "trtype": "TCP", 00:20:26.531 "adrfam": "IPv4", 00:20:26.531 "traddr": "10.0.0.2", 00:20:26.531 "trsvcid": "4420" 00:20:26.531 }, 00:20:26.531 "peer_address": { 00:20:26.531 "trtype": "TCP", 00:20:26.531 "adrfam": "IPv4", 00:20:26.531 "traddr": "10.0.0.1", 00:20:26.531 "trsvcid": "52546" 00:20:26.531 }, 00:20:26.531 "auth": { 00:20:26.531 "state": "completed", 00:20:26.531 "digest": 
"sha384", 00:20:26.531 "dhgroup": "ffdhe6144" 00:20:26.531 } 00:20:26.531 } 00:20:26.531 ]' 00:20:26.531 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.531 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.531 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.531 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:26.531 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.531 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.531 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.531 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.789 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:20:26.789 17:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:20:27.357 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.357 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:27.357 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.357 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.357 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.357 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.357 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:27.357 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:27.616 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:27.616 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.616 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.616 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:27.616 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:27.616 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.616 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.616 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.616 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.616 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.616 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.616 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.616 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.875 00:20:27.875 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.875 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.875 17:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.135 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.135 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.135 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.135 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.135 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.135 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.135 { 00:20:28.135 "cntlid": 83, 00:20:28.135 "qid": 0, 00:20:28.135 "state": "enabled", 00:20:28.135 "thread": "nvmf_tgt_poll_group_000", 00:20:28.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:28.135 "listen_address": { 00:20:28.135 "trtype": "TCP", 00:20:28.135 "adrfam": "IPv4", 00:20:28.135 "traddr": "10.0.0.2", 00:20:28.135 
"trsvcid": "4420" 00:20:28.135 }, 00:20:28.135 "peer_address": { 00:20:28.135 "trtype": "TCP", 00:20:28.135 "adrfam": "IPv4", 00:20:28.135 "traddr": "10.0.0.1", 00:20:28.135 "trsvcid": "52572" 00:20:28.135 }, 00:20:28.135 "auth": { 00:20:28.135 "state": "completed", 00:20:28.135 "digest": "sha384", 00:20:28.135 "dhgroup": "ffdhe6144" 00:20:28.135 } 00:20:28.135 } 00:20:28.135 ]' 00:20:28.135 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.135 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.135 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.135 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:28.135 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.394 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.394 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.394 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.394 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:20:28.394 17:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:20:28.960 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.960 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:28.960 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.960 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.960 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.960 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.960 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.960 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:29.219 
17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:29.219 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.219 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:29.219 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:29.219 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:29.219 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.219 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.219 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.219 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.219 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.219 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.219 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.220 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.789 00:20:29.789 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.789 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.789 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.789 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.789 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.789 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.789 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.789 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.789 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.789 { 00:20:29.789 "cntlid": 85, 00:20:29.789 "qid": 0, 00:20:29.789 "state": "enabled", 00:20:29.789 "thread": "nvmf_tgt_poll_group_000", 00:20:29.789 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:29.789 "listen_address": { 00:20:29.789 "trtype": "TCP", 00:20:29.789 "adrfam": "IPv4", 00:20:29.789 "traddr": "10.0.0.2", 00:20:29.789 "trsvcid": "4420" 00:20:29.789 }, 00:20:29.789 "peer_address": { 00:20:29.789 "trtype": "TCP", 00:20:29.789 "adrfam": "IPv4", 00:20:29.789 "traddr": "10.0.0.1", 00:20:29.789 "trsvcid": "52590" 00:20:29.789 }, 00:20:29.789 "auth": { 00:20:29.789 "state": "completed", 00:20:29.789 "digest": "sha384", 00:20:29.789 "dhgroup": "ffdhe6144" 00:20:29.789 } 00:20:29.789 } 00:20:29.789 ]' 00:20:29.789 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.789 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.789 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.048 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:30.048 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.048 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.048 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.048 17:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.048 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:20:30.048 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:20:30.617 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.617 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:30.617 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.617 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.875 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.875 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.875 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:30.875 17:37:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:30.875 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:30.876 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.876 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:30.876 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:30.876 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:30.876 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.876 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:30.876 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.876 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.876 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.876 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:30.876 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.876 17:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.444 00:20:31.444 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.444 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.444 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.444 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.444 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.444 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.444 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.444 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.444 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.444 { 00:20:31.444 "cntlid": 87, 
00:20:31.444 "qid": 0, 00:20:31.444 "state": "enabled", 00:20:31.444 "thread": "nvmf_tgt_poll_group_000", 00:20:31.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:31.444 "listen_address": { 00:20:31.444 "trtype": "TCP", 00:20:31.444 "adrfam": "IPv4", 00:20:31.444 "traddr": "10.0.0.2", 00:20:31.444 "trsvcid": "4420" 00:20:31.444 }, 00:20:31.444 "peer_address": { 00:20:31.444 "trtype": "TCP", 00:20:31.444 "adrfam": "IPv4", 00:20:31.444 "traddr": "10.0.0.1", 00:20:31.444 "trsvcid": "52608" 00:20:31.444 }, 00:20:31.444 "auth": { 00:20:31.444 "state": "completed", 00:20:31.444 "digest": "sha384", 00:20:31.444 "dhgroup": "ffdhe6144" 00:20:31.444 } 00:20:31.444 } 00:20:31.444 ]' 00:20:31.444 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.444 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.444 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.702 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:31.703 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.703 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.703 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.703 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.961 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:20:31.961 17:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.530 17:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.097 00:20:33.097 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.097 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.097 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.356 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.356 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.356 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.356 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.356 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.356 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.356 { 00:20:33.356 "cntlid": 89, 00:20:33.356 "qid": 0, 00:20:33.356 "state": "enabled", 00:20:33.356 "thread": "nvmf_tgt_poll_group_000", 00:20:33.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:33.356 "listen_address": { 00:20:33.356 "trtype": "TCP", 00:20:33.356 "adrfam": "IPv4", 00:20:33.356 "traddr": "10.0.0.2", 00:20:33.356 "trsvcid": "4420" 00:20:33.356 }, 00:20:33.356 "peer_address": { 00:20:33.356 "trtype": "TCP", 00:20:33.356 "adrfam": "IPv4", 00:20:33.356 "traddr": "10.0.0.1", 00:20:33.356 "trsvcid": "52626" 00:20:33.356 }, 00:20:33.356 "auth": { 00:20:33.356 "state": "completed", 00:20:33.356 "digest": "sha384", 00:20:33.356 "dhgroup": "ffdhe8192" 00:20:33.356 } 00:20:33.356 } 00:20:33.356 ]' 00:20:33.356 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.356 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.356 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.356 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:33.356 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.356 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.356 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.356 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.616 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:20:33.616 17:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:20:34.183 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.183 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:34.183 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.183 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.183 17:37:33 
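Each connect_authenticate round pairs a target-side grant with a host-side connection: nvmf_subsystem_add_host binds the DH-HMAC-CHAP keys (key0/ckey0 etc. are names of keyring entries, presumably registered earlier in the script) to the host NQN, bdev_nvme_attach_controller then authenticates with the matching key, and everything is torn down again with detach/remove. The --dhchap-ctrlr-key argument is what makes the exchange bidirectional; note that the key3 rounds above pass no ckey3, so they exercise unidirectional authentication only. Roughly, with the verification and kernel-initiator steps elided:

    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0             # target RPC socket
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0             # host RPC socket
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"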
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.183 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.183 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:34.183 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:34.442 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:34.442 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.442 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:34.442 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:34.442 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:34.442 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.443 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.443 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.443 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.443 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.443 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.443 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.443 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.010 00:20:35.010 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.010 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.010 17:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.269 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.269 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:35.269 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.269 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.269 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.269 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.269 { 00:20:35.269 "cntlid": 91, 00:20:35.269 "qid": 0, 00:20:35.269 "state": "enabled", 00:20:35.269 "thread": "nvmf_tgt_poll_group_000", 00:20:35.269 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:35.269 "listen_address": { 00:20:35.269 "trtype": "TCP", 00:20:35.269 "adrfam": "IPv4", 00:20:35.269 "traddr": "10.0.0.2", 00:20:35.269 "trsvcid": "4420" 00:20:35.269 }, 00:20:35.269 "peer_address": { 00:20:35.269 "trtype": "TCP", 00:20:35.269 "adrfam": "IPv4", 00:20:35.269 "traddr": "10.0.0.1", 00:20:35.269 "trsvcid": "58682" 00:20:35.269 }, 00:20:35.269 "auth": { 00:20:35.269 "state": "completed", 00:20:35.269 "digest": "sha384", 00:20:35.269 "dhgroup": "ffdhe8192" 00:20:35.269 } 00:20:35.269 } 00:20:35.269 ]' 00:20:35.269 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.269 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.269 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.269 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:35.269 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.269 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.269 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.269 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.527 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:20:35.527 17:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:20:36.095 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.095 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:36.095 17:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.095 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.095 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.095 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.095 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:36.095 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:36.354 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:36.354 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.354 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.354 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:36.354 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:36.354 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.354 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.354 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.354 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.354 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.354 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.354 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.354 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.922 00:20:36.922 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.922 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.922 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.922 17:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.922 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.922 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.922 17:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.922 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.922 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.922 { 00:20:36.922 "cntlid": 93, 00:20:36.922 "qid": 0, 00:20:36.922 "state": "enabled", 00:20:36.922 "thread": "nvmf_tgt_poll_group_000", 00:20:36.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:36.922 "listen_address": { 00:20:36.922 "trtype": "TCP", 00:20:36.922 "adrfam": "IPv4", 00:20:36.922 "traddr": "10.0.0.2", 00:20:36.922 "trsvcid": "4420" 00:20:36.922 }, 00:20:36.922 "peer_address": { 00:20:36.922 "trtype": "TCP", 00:20:36.922 "adrfam": "IPv4", 00:20:36.922 "traddr": "10.0.0.1", 00:20:36.922 "trsvcid": "58712" 00:20:36.922 }, 00:20:36.922 "auth": { 00:20:36.922 "state": "completed", 00:20:36.922 "digest": "sha384", 00:20:36.922 "dhgroup": "ffdhe8192" 00:20:36.922 } 00:20:36.922 } 00:20:36.922 ]' 00:20:36.922 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.922 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.922 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.181 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:37.181 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.181 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.181 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.181 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.439 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:20:37.439 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:20:38.006 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.006 17:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:38.006 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.007 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.007 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.007 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.007 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:38.007 17:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:38.007 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:38.007 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.007 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.007 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:38.007 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:38.007 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.007 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:38.007 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.007 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.007 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.007 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:38.007 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.007 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.575 00:20:38.575 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.575 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.575 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.833 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.833 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.833 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.833 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.833 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.833 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.833 { 00:20:38.833 "cntlid": 95, 00:20:38.833 "qid": 0, 00:20:38.833 "state": "enabled", 00:20:38.833 "thread": "nvmf_tgt_poll_group_000", 00:20:38.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:38.833 "listen_address": { 00:20:38.833 "trtype": "TCP", 00:20:38.833 "adrfam": "IPv4", 00:20:38.833 "traddr": "10.0.0.2", 00:20:38.833 "trsvcid": "4420" 00:20:38.833 }, 00:20:38.833 "peer_address": { 00:20:38.833 "trtype": "TCP", 00:20:38.833 "adrfam": "IPv4", 00:20:38.833 "traddr": "10.0.0.1", 00:20:38.833 "trsvcid": "58742" 00:20:38.833 }, 00:20:38.833 "auth": { 00:20:38.833 "state": "completed", 00:20:38.833 "digest": "sha384", 00:20:38.833 "dhgroup": "ffdhe8192" 00:20:38.833 } 00:20:38.833 } 00:20:38.833 ]' 00:20:38.833 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.833 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.833 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.833 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:38.833 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.833 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.833 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.833 17:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.092 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:20:39.092 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:20:39.659 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.659 17:37:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:39.659 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.659 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.659 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.659 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:39.659 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.659 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.659 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:39.659 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:39.918 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:39.918 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.918 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:39.918 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:39.918 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:39.918 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.918 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.918 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.918 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.918 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.918 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.918 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.918 17:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.209 00:20:40.209 
17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.209 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.209 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.506 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.506 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.506 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.506 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.506 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.506 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.506 { 00:20:40.506 "cntlid": 97, 00:20:40.506 "qid": 0, 00:20:40.506 "state": "enabled", 00:20:40.506 "thread": "nvmf_tgt_poll_group_000", 00:20:40.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:40.506 "listen_address": { 00:20:40.506 "trtype": "TCP", 00:20:40.506 "adrfam": "IPv4", 00:20:40.506 "traddr": "10.0.0.2", 00:20:40.506 "trsvcid": "4420" 00:20:40.506 }, 00:20:40.506 "peer_address": { 00:20:40.506 "trtype": "TCP", 00:20:40.506 "adrfam": "IPv4", 00:20:40.506 "traddr": "10.0.0.1", 00:20:40.506 "trsvcid": "58768" 00:20:40.506 }, 00:20:40.506 "auth": { 00:20:40.506 "state": "completed", 00:20:40.506 "digest": "sha512", 00:20:40.506 "dhgroup": "null" 00:20:40.506 } 00:20:40.506 } 00:20:40.506 ]' 00:20:40.506 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.506 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:40.506 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.506 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:40.506 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.506 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.506 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.506 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.787 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:20:40.787 17:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:20:41.356 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.356 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:41.356 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.356 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.356 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.356 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.356 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:41.356 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:41.356 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:41.356 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.356 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:41.356 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:41.356 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:41.356 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.357 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.357 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.357 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.357 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.357 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.357 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.357 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.616 00:20:41.616 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.616 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.616 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.875 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.875 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.875 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.875 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.875 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.875 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.875 { 00:20:41.875 "cntlid": 99, 00:20:41.875 "qid": 0, 00:20:41.875 "state": "enabled", 00:20:41.875 "thread": "nvmf_tgt_poll_group_000", 00:20:41.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:41.875 "listen_address": { 00:20:41.875 "trtype": "TCP", 00:20:41.875 "adrfam": "IPv4", 00:20:41.875 "traddr": "10.0.0.2", 00:20:41.875 "trsvcid": "4420" 00:20:41.875 }, 00:20:41.875 "peer_address": { 00:20:41.875 "trtype": "TCP", 00:20:41.875 "adrfam": "IPv4", 00:20:41.875 "traddr": "10.0.0.1", 00:20:41.875 "trsvcid": "58800" 00:20:41.875 }, 00:20:41.875 "auth": { 00:20:41.875 "state": "completed", 00:20:41.875 "digest": "sha512", 00:20:41.875 "dhgroup": "null" 00:20:41.875 } 00:20:41.875 } 00:20:41.875 ]' 00:20:41.875 17:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.875 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.875 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.134 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:42.134 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.134 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.134 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.134 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.393 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:20:42.393 17:37:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:20:42.960 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.960 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:42.960 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.960 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.960 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.960 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.960 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:42.960 17:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:42.960 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:42.960 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.960 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:42.960 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:42.960 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:42.960 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.960 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.960 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.960 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.960 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.960 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.960 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:42.961 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.219 00:20:43.219 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.219 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.219 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.477 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.477 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.477 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.477 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.477 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.477 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.477 { 00:20:43.477 "cntlid": 101, 00:20:43.477 "qid": 0, 00:20:43.477 "state": "enabled", 00:20:43.477 "thread": "nvmf_tgt_poll_group_000", 00:20:43.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:43.477 "listen_address": { 00:20:43.477 "trtype": "TCP", 00:20:43.477 "adrfam": "IPv4", 00:20:43.477 "traddr": "10.0.0.2", 00:20:43.477 "trsvcid": "4420" 00:20:43.477 }, 00:20:43.477 "peer_address": { 00:20:43.477 "trtype": "TCP", 00:20:43.477 "adrfam": "IPv4", 00:20:43.477 "traddr": "10.0.0.1", 00:20:43.477 "trsvcid": "58830" 00:20:43.477 }, 00:20:43.477 "auth": { 00:20:43.477 "state": "completed", 00:20:43.477 "digest": "sha512", 00:20:43.477 "dhgroup": "null" 00:20:43.477 } 00:20:43.477 } 00:20:43.477 ]' 00:20:43.477 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.477 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.478 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.478 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:43.478 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.736 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.736 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.736 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.736 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:20:43.737 17:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:20:44.303 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.303 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:44.303 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.303 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.303 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.303 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.303 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:44.303 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:44.562 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:44.562 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.562 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:44.562 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:44.562 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:44.562 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.562 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:44.563 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.563 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.563 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.563 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:44.563 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.563 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.822 00:20:44.822 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.822 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.822 17:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.082 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.082 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.082 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.082 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.082 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.082 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.082 { 00:20:45.082 "cntlid": 103, 00:20:45.082 "qid": 0, 00:20:45.082 "state": "enabled", 00:20:45.082 "thread": "nvmf_tgt_poll_group_000", 00:20:45.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:45.082 "listen_address": { 00:20:45.082 "trtype": "TCP", 00:20:45.082 "adrfam": "IPv4", 00:20:45.082 "traddr": "10.0.0.2", 00:20:45.082 "trsvcid": "4420" 00:20:45.082 }, 00:20:45.082 "peer_address": { 00:20:45.082 "trtype": "TCP", 00:20:45.082 "adrfam": "IPv4", 00:20:45.082 "traddr": "10.0.0.1", 00:20:45.082 "trsvcid": "49214" 00:20:45.082 }, 00:20:45.082 "auth": { 00:20:45.082 "state": "completed", 00:20:45.082 "digest": "sha512", 00:20:45.082 "dhgroup": "null" 00:20:45.082 } 00:20:45.082 } 00:20:45.082 ]' 00:20:45.082 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.082 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.082 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.082 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:45.082 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.082 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.082 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.082 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.340 17:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:20:45.340 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:20:45.908 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.908 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:45.908 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.908 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.908 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.908 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.908 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.908 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:45.908 17:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:46.167 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:46.167 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.167 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:46.167 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:46.167 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:46.167 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.167 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.167 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.167 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.167 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.167 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:46.167 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.167 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.426 00:20:46.426 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.426 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.426 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.685 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.685 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.685 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.685 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.685 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.685 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.685 { 00:20:46.685 "cntlid": 105, 00:20:46.685 "qid": 0, 00:20:46.685 "state": "enabled", 00:20:46.685 "thread": "nvmf_tgt_poll_group_000", 00:20:46.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:46.685 "listen_address": { 00:20:46.685 "trtype": "TCP", 00:20:46.685 "adrfam": "IPv4", 00:20:46.685 "traddr": "10.0.0.2", 00:20:46.685 "trsvcid": "4420" 00:20:46.685 }, 00:20:46.685 "peer_address": { 00:20:46.685 "trtype": "TCP", 00:20:46.685 "adrfam": "IPv4", 00:20:46.685 "traddr": "10.0.0.1", 00:20:46.685 "trsvcid": "49238" 00:20:46.685 }, 00:20:46.685 "auth": { 00:20:46.685 "state": "completed", 00:20:46.685 "digest": "sha512", 00:20:46.685 "dhgroup": "ffdhe2048" 00:20:46.685 } 00:20:46.685 } 00:20:46.685 ]' 00:20:46.685 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.685 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.685 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.685 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:46.685 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.685 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.685 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.685 17:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.944 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:20:46.944 17:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:20:47.511 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.511 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:47.511 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.511 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.511 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.511 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.511 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:47.511 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:47.770 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:47.770 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.770 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:47.770 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:47.770 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:47.770 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.770 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.770 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.770 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:47.770 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.770 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.770 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.770 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.029 00:20:48.029 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.029 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.029 17:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.288 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.288 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.288 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.288 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.288 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.288 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.288 { 00:20:48.288 "cntlid": 107, 00:20:48.288 "qid": 0, 00:20:48.288 "state": "enabled", 00:20:48.288 "thread": "nvmf_tgt_poll_group_000", 00:20:48.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:48.288 "listen_address": { 00:20:48.288 "trtype": "TCP", 00:20:48.288 "adrfam": "IPv4", 00:20:48.288 "traddr": "10.0.0.2", 00:20:48.288 "trsvcid": "4420" 00:20:48.288 }, 00:20:48.288 "peer_address": { 00:20:48.288 "trtype": "TCP", 00:20:48.288 "adrfam": "IPv4", 00:20:48.288 "traddr": "10.0.0.1", 00:20:48.288 "trsvcid": "49264" 00:20:48.288 }, 00:20:48.288 "auth": { 00:20:48.288 "state": "completed", 00:20:48.288 "digest": "sha512", 00:20:48.288 "dhgroup": "ffdhe2048" 00:20:48.288 } 00:20:48.288 } 00:20:48.288 ]' 00:20:48.288 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.288 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.288 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.288 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:48.288 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:20:48.288 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.288 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.288 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.547 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:20:48.547 17:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:20:49.114 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.114 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:49.114 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.114 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.114 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.114 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.114 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:49.114 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:49.373 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:49.373 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.373 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:49.373 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:49.373 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:49.373 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.373 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
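Each pass of the loop above follows the same connect_authenticate shape: bdev_nvme_set_options pins the host to a single digest/dhgroup combination, nvmf_subsystem_add_host registers the keypair under test on the target, and bdev_nvme_attach_controller forces the new qpair to authenticate with it. A condensed sketch of that sequence, assuming the $rpc helper variable below and a target listening on its default RPC socket (key2/ckey2 name keys loaded earlier in the run; NQNs and addresses are the ones this run uses):

  # Sketch only -- $rpc and the default target socket are assumptions.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Host side: allow exactly one digest/dhgroup combination.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # Target side: register the host NQN with the keypair under test.
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Host side: attach a controller; the qpair must now pass DH-HMAC-CHAP.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2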
00:20:49.373 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.373 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.373 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.373 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.373 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.373 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.632 00:20:49.632 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.632 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.632 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.632 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.632 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.632 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.632 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.632 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.632 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.632 { 00:20:49.632 "cntlid": 109, 00:20:49.632 "qid": 0, 00:20:49.632 "state": "enabled", 00:20:49.632 "thread": "nvmf_tgt_poll_group_000", 00:20:49.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:49.632 "listen_address": { 00:20:49.632 "trtype": "TCP", 00:20:49.632 "adrfam": "IPv4", 00:20:49.632 "traddr": "10.0.0.2", 00:20:49.632 "trsvcid": "4420" 00:20:49.632 }, 00:20:49.632 "peer_address": { 00:20:49.632 "trtype": "TCP", 00:20:49.632 "adrfam": "IPv4", 00:20:49.632 "traddr": "10.0.0.1", 00:20:49.632 "trsvcid": "49288" 00:20:49.632 }, 00:20:49.632 "auth": { 00:20:49.632 "state": "completed", 00:20:49.632 "digest": "sha512", 00:20:49.632 "dhgroup": "ffdhe2048" 00:20:49.632 } 00:20:49.632 } 00:20:49.632 ]' 00:20:49.632 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.891 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.891 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.891 17:37:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:49.891 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.891 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.891 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.891 17:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.149 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:20:50.149 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:20:50.717 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.717 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:50.717 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.717 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.717 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.717 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.717 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:50.717 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:50.718 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:50.718 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.718 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:50.718 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:50.718 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:50.718 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.718 17:37:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:50.718 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.718 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.976 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.976 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:50.976 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.976 17:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.976 00:20:51.235 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.235 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.235 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.235 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.235 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.235 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.235 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.235 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.235 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.235 { 00:20:51.235 "cntlid": 111, 00:20:51.235 "qid": 0, 00:20:51.235 "state": "enabled", 00:20:51.235 "thread": "nvmf_tgt_poll_group_000", 00:20:51.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:51.235 "listen_address": { 00:20:51.235 "trtype": "TCP", 00:20:51.235 "adrfam": "IPv4", 00:20:51.235 "traddr": "10.0.0.2", 00:20:51.235 "trsvcid": "4420" 00:20:51.235 }, 00:20:51.235 "peer_address": { 00:20:51.235 "trtype": "TCP", 00:20:51.235 "adrfam": "IPv4", 00:20:51.235 "traddr": "10.0.0.1", 00:20:51.235 "trsvcid": "49314" 00:20:51.235 }, 00:20:51.235 "auth": { 00:20:51.235 "state": "completed", 00:20:51.235 "digest": "sha512", 00:20:51.235 "dhgroup": "ffdhe2048" 00:20:51.235 } 00:20:51.235 } 00:20:51.235 ]' 00:20:51.235 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.235 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.494 
17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.494 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:51.494 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.494 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.494 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.494 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.753 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:20:51.753 17:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.360 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.619 00:20:52.619 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.619 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.619 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.878 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.878 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.878 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.878 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.878 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.878 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.878 { 00:20:52.878 "cntlid": 113, 00:20:52.878 "qid": 0, 00:20:52.878 "state": "enabled", 00:20:52.878 "thread": "nvmf_tgt_poll_group_000", 00:20:52.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:52.878 "listen_address": { 00:20:52.878 "trtype": "TCP", 00:20:52.878 "adrfam": "IPv4", 00:20:52.878 "traddr": "10.0.0.2", 00:20:52.878 "trsvcid": "4420" 00:20:52.878 }, 00:20:52.878 "peer_address": { 00:20:52.878 "trtype": "TCP", 00:20:52.878 "adrfam": "IPv4", 00:20:52.878 "traddr": "10.0.0.1", 00:20:52.878 "trsvcid": "49342" 00:20:52.878 }, 00:20:52.878 "auth": { 00:20:52.878 "state": "completed", 00:20:52.878 "digest": "sha512", 00:20:52.878 "dhgroup": "ffdhe3072" 00:20:52.878 } 00:20:52.878 } 00:20:52.878 ]' 00:20:52.878 17:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.878 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.878 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.878 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:52.878 17:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.137 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.137 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.137 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.137 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:20:53.137 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:20:53.704 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.704 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:53.704 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.704 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.704 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.704 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.704 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:53.704 17:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:53.963 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:53.963 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.963 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:20:53.963 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:53.963 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:53.963 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.963 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.964 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.964 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.964 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.964 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.964 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.964 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.222 00:20:54.222 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.222 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.222 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.480 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.480 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.480 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.480 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.480 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.480 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.480 { 00:20:54.480 "cntlid": 115, 00:20:54.480 "qid": 0, 00:20:54.480 "state": "enabled", 00:20:54.480 "thread": "nvmf_tgt_poll_group_000", 00:20:54.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:54.480 "listen_address": { 00:20:54.480 "trtype": "TCP", 00:20:54.480 "adrfam": "IPv4", 00:20:54.480 "traddr": "10.0.0.2", 00:20:54.480 "trsvcid": "4420" 00:20:54.480 }, 00:20:54.481 "peer_address": { 00:20:54.481 "trtype": "TCP", 00:20:54.481 "adrfam": "IPv4", 
00:20:54.481 "traddr": "10.0.0.1", 00:20:54.481 "trsvcid": "49358" 00:20:54.481 }, 00:20:54.481 "auth": { 00:20:54.481 "state": "completed", 00:20:54.481 "digest": "sha512", 00:20:54.481 "dhgroup": "ffdhe3072" 00:20:54.481 } 00:20:54.481 } 00:20:54.481 ]' 00:20:54.481 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.481 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.481 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.481 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:54.481 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.739 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.739 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.739 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.739 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:20:54.739 17:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:20:55.308 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.308 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:55.308 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.308 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.308 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.308 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.308 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:55.308 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:55.567 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:20:55.567 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.567 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:55.567 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:55.567 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:55.567 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.567 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.567 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.567 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.567 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.567 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.567 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.567 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.826 00:20:55.826 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.826 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.826 17:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.085 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.085 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.085 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.085 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.085 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.085 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.085 { 00:20:56.085 "cntlid": 117, 00:20:56.085 "qid": 0, 00:20:56.085 "state": "enabled", 00:20:56.085 "thread": "nvmf_tgt_poll_group_000", 00:20:56.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:56.085 "listen_address": { 00:20:56.085 "trtype": "TCP", 
00:20:56.085 "adrfam": "IPv4", 00:20:56.085 "traddr": "10.0.0.2", 00:20:56.085 "trsvcid": "4420" 00:20:56.085 }, 00:20:56.085 "peer_address": { 00:20:56.085 "trtype": "TCP", 00:20:56.085 "adrfam": "IPv4", 00:20:56.085 "traddr": "10.0.0.1", 00:20:56.085 "trsvcid": "42484" 00:20:56.085 }, 00:20:56.085 "auth": { 00:20:56.085 "state": "completed", 00:20:56.085 "digest": "sha512", 00:20:56.085 "dhgroup": "ffdhe3072" 00:20:56.085 } 00:20:56.085 } 00:20:56.085 ]' 00:20:56.085 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.085 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.085 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.085 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:56.085 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.085 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.085 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.085 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.344 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:20:56.344 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:20:56.911 17:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.911 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:56.911 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.911 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.911 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.911 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.911 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:56.911 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:57.171 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:57.171 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.171 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:57.171 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:57.171 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:57.171 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.171 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:57.171 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.171 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.171 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.171 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:57.171 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.171 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.430 00:20:57.430 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.430 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.430 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.688 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.688 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.688 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.688 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.688 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.688 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.689 { 00:20:57.689 "cntlid": 119, 00:20:57.689 "qid": 0, 00:20:57.689 "state": "enabled", 00:20:57.689 "thread": "nvmf_tgt_poll_group_000", 00:20:57.689 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:57.689 "listen_address": { 00:20:57.689 "trtype": "TCP", 00:20:57.689 "adrfam": "IPv4", 00:20:57.689 "traddr": "10.0.0.2", 00:20:57.689 "trsvcid": "4420" 00:20:57.689 }, 00:20:57.689 "peer_address": { 00:20:57.689 "trtype": "TCP", 00:20:57.689 "adrfam": "IPv4", 00:20:57.689 "traddr": "10.0.0.1", 00:20:57.689 "trsvcid": "42514" 00:20:57.689 }, 00:20:57.689 "auth": { 00:20:57.689 "state": "completed", 00:20:57.689 "digest": "sha512", 00:20:57.689 "dhgroup": "ffdhe3072" 00:20:57.689 } 00:20:57.689 } 00:20:57.689 ]' 00:20:57.689 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.689 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.689 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.689 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:57.689 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.689 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.689 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.689 17:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.947 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:20:57.947 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:20:58.515 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.515 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:58.515 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.515 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.515 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.515 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.515 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.515 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:58.515 17:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:58.774 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:58.774 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.774 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:58.774 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:58.774 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:58.774 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.774 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.774 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.774 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.774 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.774 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.774 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.774 17:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.033 00:20:59.033 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.033 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.033 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.292 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.292 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.292 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.292 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.292 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.292 17:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.292 { 00:20:59.292 "cntlid": 121, 00:20:59.292 "qid": 0, 00:20:59.292 "state": "enabled", 00:20:59.292 "thread": "nvmf_tgt_poll_group_000", 00:20:59.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:59.292 "listen_address": { 00:20:59.292 "trtype": "TCP", 00:20:59.292 "adrfam": "IPv4", 00:20:59.292 "traddr": "10.0.0.2", 00:20:59.292 "trsvcid": "4420" 00:20:59.292 }, 00:20:59.292 "peer_address": { 00:20:59.292 "trtype": "TCP", 00:20:59.292 "adrfam": "IPv4", 00:20:59.292 "traddr": "10.0.0.1", 00:20:59.292 "trsvcid": "42546" 00:20:59.292 }, 00:20:59.292 "auth": { 00:20:59.292 "state": "completed", 00:20:59.292 "digest": "sha512", 00:20:59.292 "dhgroup": "ffdhe4096" 00:20:59.292 } 00:20:59.292 } 00:20:59.292 ]' 00:20:59.292 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.292 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.292 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.292 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:59.292 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.551 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.551 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.551 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.551 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:20:59.552 17:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:21:00.119 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.119 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:00.119 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.119 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.119 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
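Besides the SPDK bdev path, every cycle above also authenticates through the kernel initiator: nvme_connect hands the same DHHC-1 secret blobs to nvme-cli, and the nvme disconnect that follows confirms exactly one controller is torn down. Roughly, with placeholder secrets (the real DHHC-1 strings are generated per run) and reading -i/-l as nvme-cli's io-queue-count and ctrl-loss-tmo flags:

  # Placeholder secrets -- substitute the DHHC-1 strings printed by the run.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
      --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 \
      --dhchap-secret 'DHHC-1:00:<host key>:' \
      --dhchap-ctrl-secret 'DHHC-1:03:<controller key>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0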
00:21:00.119 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.119 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:00.119 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:00.379 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:00.379 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.379 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:00.379 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:00.379 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:00.379 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.379 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.379 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.379 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.379 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.379 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.379 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.379 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.638 00:21:00.638 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.638 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.638 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.898 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.898 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.898 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
00:21:00.898 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:00.898 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.898 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:00.898 {
00:21:00.898 "cntlid": 123,
00:21:00.898 "qid": 0,
00:21:00.898 "state": "enabled",
00:21:00.898 "thread": "nvmf_tgt_poll_group_000",
00:21:00.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:21:00.898 "listen_address": {
00:21:00.898 "trtype": "TCP",
00:21:00.898 "adrfam": "IPv4",
00:21:00.898 "traddr": "10.0.0.2",
00:21:00.898 "trsvcid": "4420"
00:21:00.898 },
00:21:00.898 "peer_address": {
00:21:00.898 "trtype": "TCP",
00:21:00.898 "adrfam": "IPv4",
00:21:00.898 "traddr": "10.0.0.1",
00:21:00.898 "trsvcid": "42580"
00:21:00.898 },
00:21:00.898 "auth": {
00:21:00.898 "state": "completed",
00:21:00.898 "digest": "sha512",
00:21:00.898 "dhgroup": "ffdhe4096"
00:21:00.898 }
00:21:00.898 }
00:21:00.898 ]'
00:21:00.898 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:00.898 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:00.898 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:00.898 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:00.898 17:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:00.898 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:00.898 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:00.898 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:01.157 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==:
00:21:01.157 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==:
00:21:01.724 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:01.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:01.724 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:21:01.724 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:01.724 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:01.724 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:01.725 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:01.725 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:01.725 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:01.984 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:21:01.984 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:01.984 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:01.984 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:21:01.984 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:01.985 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:01.985 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:01.985 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:01.985 17:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:01.985 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:01.985 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:01.985 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:01.985 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:02.244
00:21:02.244 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:02.244 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:02.244 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:02.509 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:02.509 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:02.509 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:02.509 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:02.509 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:02.509 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:02.509 {
00:21:02.509 "cntlid": 125,
00:21:02.509 "qid": 0,
00:21:02.509 "state": "enabled",
00:21:02.509 "thread": "nvmf_tgt_poll_group_000",
00:21:02.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:21:02.509 "listen_address": {
00:21:02.509 "trtype": "TCP",
00:21:02.509 "adrfam": "IPv4",
00:21:02.509 "traddr": "10.0.0.2",
00:21:02.509 "trsvcid": "4420"
00:21:02.509 },
00:21:02.509 "peer_address": {
00:21:02.509 "trtype": "TCP",
00:21:02.509 "adrfam": "IPv4",
00:21:02.509 "traddr": "10.0.0.1",
00:21:02.509 "trsvcid": "42618"
00:21:02.509 },
00:21:02.510 "auth": {
00:21:02.510 "state": "completed",
00:21:02.510 "digest": "sha512",
00:21:02.510 "dhgroup": "ffdhe4096"
00:21:02.510 }
00:21:02.510 }
00:21:02.510 ]'
00:21:02.510 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:02.510 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:02.510 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:02.510 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:02.510 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:02.510 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:02.510 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:02.510 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:02.779 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT:
00:21:02.779 17:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT:
00:21:03.346 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:03.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:03.346 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:21:03.346 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:03.346 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:03.346 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:03.346 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:03.346 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:03.346 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:03.605 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:21:03.605 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:03.605 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:03.605 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:21:03.605 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:03.605 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:03.605 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:21:03.605 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:03.605 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:03.605 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:03.605 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:03.605 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:03.605 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:03.863
00:21:03.863 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:03.863 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:03.863 17:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:04.122 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:04.122 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:04.122 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:04.122 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:04.122 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:04.122 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:04.122 {
00:21:04.122 "cntlid": 127,
00:21:04.122 "qid": 0,
00:21:04.122 "state": "enabled",
00:21:04.122 "thread": "nvmf_tgt_poll_group_000",
00:21:04.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:21:04.122 "listen_address": {
00:21:04.122 "trtype": "TCP",
00:21:04.122 "adrfam": "IPv4",
00:21:04.122 "traddr": "10.0.0.2",
00:21:04.122 "trsvcid": "4420"
00:21:04.122 },
00:21:04.122 "peer_address": {
00:21:04.122 "trtype": "TCP",
00:21:04.122 "adrfam": "IPv4",
00:21:04.122 "traddr": "10.0.0.1",
00:21:04.122 "trsvcid": "42644"
00:21:04.122 },
00:21:04.122 "auth": {
00:21:04.122 "state": "completed",
00:21:04.122 "digest": "sha512",
00:21:04.122 "dhgroup": "ffdhe4096"
00:21:04.122 }
00:21:04.122 }
00:21:04.122 ]'
00:21:04.122 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:04.122 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:04.122 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:04.122 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:04.122 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:04.122 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:04.122 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:04.122 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:04.382 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=:
00:21:04.382 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=:
00:21:04.950 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:04.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:04.950 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
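That completes the ffdhe4096 pass for every key index; the outer dhgroup loop advances to ffdhe6144 below and repeats the same key rounds. Each round is verified the same way: nvmf_subsystem_get_qpairs must report exactly the configured digest and dhgroup plus an auth state of "completed". The three [[ ... ]] string compares above could equally be collapsed into one jq expression; a sketch, assuming the same target RPC socket (jq -e turns the boolean result into an exit status, so a mismatch fails the pipeline just like the string compares do):

    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -e '.[0].auth
               | .digest == "sha512" and .dhgroup == "ffdhe4096" and .state == "completed"'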
00:21:04.950 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:04.950 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:04.950 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:04.950 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:04.950 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:04.950 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:04.950 17:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:05.209 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:21:05.209 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:05.209 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:05.209 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:05.209 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:05.209 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:05.209 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:05.209 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:05.209 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:05.209 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:05.209 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:05.209 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:05.209 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:05.468
00:21:05.468 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:05.468 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:05.468 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:05.727 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:05.727 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:05.727 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:05.727 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:05.727 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:05.727 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:05.727 {
00:21:05.727 "cntlid": 129,
00:21:05.727 "qid": 0,
00:21:05.727 "state": "enabled",
00:21:05.727 "thread": "nvmf_tgt_poll_group_000",
00:21:05.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:21:05.727 "listen_address": {
00:21:05.727 "trtype": "TCP",
00:21:05.727 "adrfam": "IPv4",
00:21:05.727 "traddr": "10.0.0.2",
00:21:05.727 "trsvcid": "4420"
00:21:05.727 },
00:21:05.727 "peer_address": {
00:21:05.727 "trtype": "TCP",
00:21:05.727 "adrfam": "IPv4",
00:21:05.727 "traddr": "10.0.0.1",
00:21:05.727 "trsvcid": "37408"
00:21:05.727 },
00:21:05.727 "auth": {
00:21:05.727 "state": "completed",
00:21:05.727 "digest": "sha512",
00:21:05.727 "dhgroup": "ffdhe6144"
00:21:05.727 }
00:21:05.727 }
00:21:05.727 ]'
00:21:05.727 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:05.727 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:05.728 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:05.728 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:05.728 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:05.728 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:05.728 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:05.728 17:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:05.986 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=:
00:21:05.986 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=:
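The secrets handed to nvme connect use the NVMe-oF key representation DHHC-1:<t>:<base64 payload>:, where (per the NVMe DH-HMAC-CHAP secret format; stated here from memory, so verify against the spec) <t> is 00 for a cleartext secret and 01/02/03 for a secret pre-transformed with SHA-256/384/512, a transformed secret's decoded length matching the hash output. That is why the key0 host secret above begins with DHHC-1:00: while its controller counterpart begins with DHHC-1:03:. Assuming a reasonably recent nvme-cli, such strings can be generated rather than hand-assembled (flag spelling per nvme-cli's gen-dhchap-key helper; treat as a sketch):

    # 64-byte secret transformed with SHA-512 => prints a DHHC-1:03:... string
    nvme gen-dhchap-key -l 64 -m 3 \
        -n nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562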
00:21:06.553 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:06.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:06.553 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:21:06.553 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:06.553 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:06.553 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:06.553 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:06.553 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:06.553 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:06.811 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1
00:21:06.811 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:06.811 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:06.811 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:06.811 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:06.811 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:06.811 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:06.811 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:06.811 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:06.811 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:06.811 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:06.811 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:06.811 17:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:07.070
00:21:07.070 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:07.070 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:07.070 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:07.328 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:07.328 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:07.328 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:07.328 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:07.328 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:07.328 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:07.328 {
00:21:07.328 "cntlid": 131,
00:21:07.328 "qid": 0,
00:21:07.328 "state": "enabled",
00:21:07.328 "thread": "nvmf_tgt_poll_group_000",
00:21:07.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:21:07.328 "listen_address": {
00:21:07.329 "trtype": "TCP",
00:21:07.329 "adrfam": "IPv4",
00:21:07.329 "traddr": "10.0.0.2",
00:21:07.329 "trsvcid": "4420"
00:21:07.329 },
00:21:07.329 "peer_address": {
00:21:07.329 "trtype": "TCP",
00:21:07.329 "adrfam": "IPv4",
00:21:07.329 "traddr": "10.0.0.1",
00:21:07.329 "trsvcid": "37442"
00:21:07.329 },
00:21:07.329 "auth": {
00:21:07.329 "state": "completed",
00:21:07.329 "digest": "sha512",
00:21:07.329 "dhgroup": "ffdhe6144"
00:21:07.329 }
00:21:07.329 }
00:21:07.329 ]'
00:21:07.329 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:07.329 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:07.329 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:07.329 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:07.329 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:07.329 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:07.329 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:07.329 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:07.588 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==:
00:21:07.588 17:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==:
00:21:08.156 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:08.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:08.156 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:21:08.156 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:08.156 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:08.156 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:08.156 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:08.156 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:08.156 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:08.416 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2
00:21:08.416 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:08.416 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:08.416 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:08.416 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:08.416 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:08.416 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:08.416 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:08.416 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:08.416 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:08.416 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:08.416 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:08.416 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:08.676
00:21:08.935 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:08.935 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:08.935 17:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:08.935 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:08.935 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:08.935 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:08.935 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:08.935 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:08.935 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:08.935 {
00:21:08.935 "cntlid": 133,
00:21:08.935 "qid": 0,
00:21:08.935 "state": "enabled",
00:21:08.935 "thread": "nvmf_tgt_poll_group_000",
00:21:08.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:21:08.935 "listen_address": {
00:21:08.935 "trtype": "TCP",
00:21:08.935 "adrfam": "IPv4",
00:21:08.935 "traddr": "10.0.0.2",
00:21:08.935 "trsvcid": "4420"
00:21:08.935 },
00:21:08.935 "peer_address": {
00:21:08.935 "trtype": "TCP",
00:21:08.935 "adrfam": "IPv4",
00:21:08.935 "traddr": "10.0.0.1",
00:21:08.935 "trsvcid": "37468"
00:21:08.935 },
00:21:08.935 "auth": {
00:21:08.935 "state": "completed",
00:21:08.935 "digest": "sha512",
00:21:08.935 "dhgroup": "ffdhe6144"
00:21:08.935 }
00:21:08.935 }
00:21:08.935 ]'
00:21:08.935 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:08.935 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:09.194 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:09.194 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:09.194 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:09.194 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:09.194 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:09.194 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:09.453 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT:
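Note the asymmetry between rounds here: key indexes 0-2 carry a companion controller key (ckey0-ckey2), but key3 does not, so the ${ckeys[$3]:+...} expansion at target/auth.sh@68 produces nothing and the key3 rounds run with --dhchap-key only, i.e. the target authenticates the host but the host never challenges the controller. The expansion idiom itself is plain bash; a standalone sketch (array contents are illustrative):

    declare -A ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)  # deliberately no entry for 3
    for keyid in 0 1 2 3; do
        # expands to two words when ckeys[keyid] is set and non-empty, else to nothing
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid extra args: ${ckey[*]:-<none>}"
    done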
00:21:09.453 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT:
00:21:10.020 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:10.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:10.020 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:21:10.021 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:10.021 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:10.021 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:10.021 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:10.021 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:10.021 17:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:10.021 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:21:10.021 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:10.021 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:10.021 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:10.021 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:10.021 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:10.021 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:21:10.021 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:10.021 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:10.021 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:10.021 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:10.021 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:10.021 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:10.589
00:21:10.589 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:10.589 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:10.589 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:10.589 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:10.589 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:10.589 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:10.589 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:10.589 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:10.589 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:10.589 {
00:21:10.589 "cntlid": 135,
00:21:10.589 "qid": 0,
00:21:10.589 "state": "enabled",
00:21:10.589 "thread": "nvmf_tgt_poll_group_000",
00:21:10.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:21:10.589 "listen_address": {
00:21:10.589 "trtype": "TCP",
00:21:10.589 "adrfam": "IPv4",
00:21:10.589 "traddr": "10.0.0.2",
00:21:10.589 "trsvcid": "4420"
00:21:10.589 },
00:21:10.589 "peer_address": {
00:21:10.589 "trtype": "TCP",
00:21:10.589 "adrfam": "IPv4",
00:21:10.589 "traddr": "10.0.0.1",
00:21:10.589 "trsvcid": "37490"
00:21:10.589 },
00:21:10.589 "auth": {
00:21:10.589 "state": "completed",
00:21:10.589 "digest": "sha512",
00:21:10.589 "dhgroup": "ffdhe6144"
00:21:10.589 }
00:21:10.589 }
00:21:10.589 ]'
00:21:10.589 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:10.589 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:10.848 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:10.848 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:10.848 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:10.848 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:10.848 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:10.848 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:11.106 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=:
00:21:11.106 17:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=:
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:11.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:11.675 17:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:12.242
00:21:12.242 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:12.242 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:12.242 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:12.500 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:12.500 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:12.500 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:12.500 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:12.500 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:12.500 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:12.500 {
00:21:12.500 "cntlid": 137,
00:21:12.500 "qid": 0,
00:21:12.500 "state": "enabled",
00:21:12.500 "thread": "nvmf_tgt_poll_group_000",
00:21:12.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:21:12.500 "listen_address": {
00:21:12.500 "trtype": "TCP",
00:21:12.500 "adrfam": "IPv4",
00:21:12.500 "traddr": "10.0.0.2",
00:21:12.500 "trsvcid": "4420"
00:21:12.500 },
00:21:12.500 "peer_address": {
00:21:12.500 "trtype": "TCP",
00:21:12.500 "adrfam": "IPv4",
00:21:12.500 "traddr": "10.0.0.1",
00:21:12.500 "trsvcid": "37498"
00:21:12.500 },
00:21:12.500 "auth": {
00:21:12.500 "state": "completed",
00:21:12.500 "digest": "sha512",
00:21:12.500 "dhgroup": "ffdhe8192"
00:21:12.500 }
00:21:12.500 }
00:21:12.500 ]'
00:21:12.500 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:12.500 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:12.500 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:12.500 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:12.500 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:12.500 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:12.500 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:12.500 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
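As with the earlier groups, every ffdhe8192 round exercises two independent initiators: first the SPDK host stack over /var/tmp/host.sock (bdev_nvme_attach_controller / bdev_nvme_detach_controller), then the kernel nvme-tcp initiator through nvme connect with the equivalent DHHC-1 secrets, before nvme disconnect and nvmf_subsystem_remove_host reset the state for the next key. The kernel-side pair, reduced to its essentials (secrets elided; -i 1 requests a single I/O queue and -l sets ctrl-loss-tmo, matching the flags in the log):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0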
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.759 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:21:12.759 17:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:21:13.326 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.326 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:13.326 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.326 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.326 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.326 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.326 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.326 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.584 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:13.584 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.584 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.584 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:13.584 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:13.584 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.584 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.584 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.584 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.584 17:38:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.584 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.584 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.584 17:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.151 00:21:14.151 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.151 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.151 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.151 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.151 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.151 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.151 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.151 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.151 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.151 { 00:21:14.151 "cntlid": 139, 00:21:14.151 "qid": 0, 00:21:14.151 "state": "enabled", 00:21:14.151 "thread": "nvmf_tgt_poll_group_000", 00:21:14.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:14.151 "listen_address": { 00:21:14.151 "trtype": "TCP", 00:21:14.151 "adrfam": "IPv4", 00:21:14.151 "traddr": "10.0.0.2", 00:21:14.151 "trsvcid": "4420" 00:21:14.151 }, 00:21:14.151 "peer_address": { 00:21:14.151 "trtype": "TCP", 00:21:14.151 "adrfam": "IPv4", 00:21:14.151 "traddr": "10.0.0.1", 00:21:14.151 "trsvcid": "37530" 00:21:14.151 }, 00:21:14.151 "auth": { 00:21:14.151 "state": "completed", 00:21:14.151 "digest": "sha512", 00:21:14.151 "dhgroup": "ffdhe8192" 00:21:14.151 } 00:21:14.151 } 00:21:14.151 ]' 00:21:14.151 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.151 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.151 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.411 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:14.411 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.411 17:38:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.411 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.411 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.670 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:21:14.670 17:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: --dhchap-ctrl-secret DHHC-1:02:NTM1MjYxYTk3ZjMzZmNkNTIyNDYyMTI0YjQwNzMzNmM2MmU2YjY2ZjgxN2MwMGQ5MFjF6w==: 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.240 17:38:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.240 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.808 00:21:15.808 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.808 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.808 17:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.066 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.066 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.066 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.066 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.066 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.066 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.066 { 00:21:16.066 "cntlid": 141, 00:21:16.066 "qid": 0, 00:21:16.066 "state": "enabled", 00:21:16.066 "thread": "nvmf_tgt_poll_group_000", 00:21:16.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:16.066 "listen_address": { 00:21:16.066 "trtype": "TCP", 00:21:16.066 "adrfam": "IPv4", 00:21:16.066 "traddr": "10.0.0.2", 00:21:16.066 "trsvcid": "4420" 00:21:16.066 }, 00:21:16.066 "peer_address": { 00:21:16.066 "trtype": "TCP", 00:21:16.066 "adrfam": "IPv4", 00:21:16.066 "traddr": "10.0.0.1", 00:21:16.066 "trsvcid": "37908" 00:21:16.066 }, 00:21:16.066 "auth": { 00:21:16.066 "state": "completed", 00:21:16.066 "digest": "sha512", 00:21:16.066 "dhgroup": "ffdhe8192" 00:21:16.066 } 00:21:16.066 } 00:21:16.066 ]' 00:21:16.066 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.067 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.067 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.067 17:38:15 
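[editor's sketch] In the qpair dumps here, listen_address is the target endpoint (10.0.0.2:4420) and peer_address is the host's ephemeral endpoint (10.0.0.1:37908 in this pass); cntlid advances across the successive attaches (139, 141, ...). A one-line jq sketch, with a filter introduced here for illustration, to pull those fields out of the same RPC:
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | \
      jq -r '.[0] | "cntlid=\(.cntlid) peer=\(.peer_address.traddr):\(.peer_address.trsvcid)"'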
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:16.067 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.067 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.067 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.067 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.325 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:21:16.325 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:01:NDM1ZDE3ODUxNjY2OGYxZmMzZTVhYTQ5MGE0MGIxMzF8kNYT: 00:21:16.894 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.894 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:16.894 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.894 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.894 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.894 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.894 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:16.894 17:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:17.153 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:17.153 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.153 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.153 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:17.153 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:17.153 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.153 17:38:16 
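[editor's sketch] Alongside the SPDK host, each pass also authenticates the kernel initiator through nvme-cli, feeding it the literal DHHC-1 secret blobs. A sketch of that step with the secrets abbreviated (the full values appear verbatim in the log; HOSTNQN is shorthand introduced here):
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
  # --dhchap-secret is the host key (key2 in this pass), --dhchap-ctrl-secret the controller key (ckey2).
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 \
      --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0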
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:17.153 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.153 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.153 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.153 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:17.153 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.153 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.721 00:21:17.721 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.721 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.721 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.721 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.721 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.721 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.721 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.721 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.721 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.721 { 00:21:17.721 "cntlid": 143, 00:21:17.721 "qid": 0, 00:21:17.721 "state": "enabled", 00:21:17.721 "thread": "nvmf_tgt_poll_group_000", 00:21:17.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:17.721 "listen_address": { 00:21:17.721 "trtype": "TCP", 00:21:17.721 "adrfam": "IPv4", 00:21:17.721 "traddr": "10.0.0.2", 00:21:17.721 "trsvcid": "4420" 00:21:17.721 }, 00:21:17.721 "peer_address": { 00:21:17.721 "trtype": "TCP", 00:21:17.721 "adrfam": "IPv4", 00:21:17.721 "traddr": "10.0.0.1", 00:21:17.721 "trsvcid": "37924" 00:21:17.721 }, 00:21:17.721 "auth": { 00:21:17.721 "state": "completed", 00:21:17.721 "digest": "sha512", 00:21:17.721 "dhgroup": "ffdhe8192" 00:21:17.721 } 00:21:17.721 } 00:21:17.721 ]' 00:21:17.721 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.003 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.003 
17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.003 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:18.003 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.003 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.003 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.003 17:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.309 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:21:18.309 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:21:18.956 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.956 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:18.956 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.956 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.956 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.957 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:18.957 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:18.957 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:18.957 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:18.957 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:18.957 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:18.957 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:18.957 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.957 17:38:17 
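[editor's sketch] Note that the key3 connect above passes only --dhchap-secret, with no --dhchap-ctrl-secret: key3 has no controller counterpart, so that pass authenticates in one direction only. After the per-key passes, the harness reopens the host to the full negotiation matrix; the command, verbatim from this log (RPC is shorthand introduced here):
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192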
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.957 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:18.957 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:18.957 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.957 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.958 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.958 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.958 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.958 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.958 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.958 17:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.575 00:21:19.575 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.575 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.575 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.575 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.575 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.575 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.575 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.575 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.575 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.575 { 00:21:19.575 "cntlid": 145, 00:21:19.575 "qid": 0, 00:21:19.575 "state": "enabled", 00:21:19.575 "thread": "nvmf_tgt_poll_group_000", 00:21:19.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:19.575 "listen_address": { 00:21:19.575 "trtype": "TCP", 00:21:19.575 "adrfam": "IPv4", 00:21:19.575 "traddr": "10.0.0.2", 00:21:19.575 "trsvcid": "4420" 00:21:19.575 }, 00:21:19.575 "peer_address": { 00:21:19.575 
"trtype": "TCP", 00:21:19.575 "adrfam": "IPv4", 00:21:19.575 "traddr": "10.0.0.1", 00:21:19.575 "trsvcid": "37958" 00:21:19.575 }, 00:21:19.575 "auth": { 00:21:19.575 "state": "completed", 00:21:19.575 "digest": "sha512", 00:21:19.575 "dhgroup": "ffdhe8192" 00:21:19.575 } 00:21:19.575 } 00:21:19.575 ]' 00:21:19.575 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.834 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.834 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.834 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:19.834 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.834 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.834 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.835 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.094 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:21:20.094 17:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MTk3NzY0ZjEwZWQxZmZkNGUzMDE1NGVjNmMyMGRjMDUxY2U1MWJmZGViZjEyZmNke0HXCA==: --dhchap-ctrl-secret DHHC-1:03:N2IyZTFjODU0MGUyNzAwZmNkYTBlNDVmMmRjYmE0ZmEwNWU5ZDlhODkyZjQ1YTQzNzU1MmVhMzNiNzkxMTk4MZ2T1P0=: 00:21:20.662 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.662 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:20.662 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.662 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.662 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.662 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:21:20.662 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.662 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.662 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.662 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:20.662 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:20.662 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:20.662 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:20.662 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.662 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:20.662 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.662 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:20.662 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:20.662 17:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:20.921 request: 00:21:20.921 { 00:21:20.921 "name": "nvme0", 00:21:20.921 "trtype": "tcp", 00:21:20.921 "traddr": "10.0.0.2", 00:21:20.921 "adrfam": "ipv4", 00:21:20.921 "trsvcid": "4420", 00:21:20.921 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:20.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:20.921 "prchk_reftag": false, 00:21:20.921 "prchk_guard": false, 00:21:20.921 "hdgst": false, 00:21:20.921 "ddgst": false, 00:21:20.921 "dhchap_key": "key2", 00:21:20.921 "allow_unrecognized_csi": false, 00:21:20.921 "method": "bdev_nvme_attach_controller", 00:21:20.921 "req_id": 1 00:21:20.921 } 00:21:20.921 Got JSON-RPC error response 00:21:20.921 response: 00:21:20.921 { 00:21:20.921 "code": -5, 00:21:20.921 "message": "Input/output error" 00:21:20.921 } 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.921 17:38:20 
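[editor's sketch] The request/response dump above is an expected failure: at this point the host is registered on the subsystem with key1 only (target/auth.sh@144), so an attach presenting key2 is rejected and the RPC surfaces JSON-RPC error -5 (Input/output error). A sketch of the assertion, written out without the harness's NOT wrapper (RPC and HOSTNQN are shorthand introduced here):
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
  # Must fail: the subsystem only accepts key1 for this host.
  if $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2; then
    echo "unexpected: attach with a non-registered key succeeded" >&2
  fi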
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:20.921 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:21.488 request: 00:21:21.488 { 00:21:21.488 "name": "nvme0", 00:21:21.488 "trtype": "tcp", 00:21:21.488 "traddr": "10.0.0.2", 00:21:21.488 "adrfam": "ipv4", 00:21:21.488 "trsvcid": "4420", 00:21:21.488 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:21.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:21.488 "prchk_reftag": false, 00:21:21.488 "prchk_guard": false, 00:21:21.488 "hdgst": false, 00:21:21.488 "ddgst": false, 00:21:21.488 "dhchap_key": "key1", 00:21:21.488 "dhchap_ctrlr_key": "ckey2", 00:21:21.488 "allow_unrecognized_csi": false, 00:21:21.488 "method": "bdev_nvme_attach_controller", 00:21:21.488 "req_id": 1 00:21:21.488 } 00:21:21.488 Got JSON-RPC error response 00:21:21.488 response: 00:21:21.488 { 00:21:21.488 "code": -5, 00:21:21.488 "message": "Input/output error" 00:21:21.488 } 00:21:21.488 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:21.488 17:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:21.488 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:21.488 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:21.488 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:21.488 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.488 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.488 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.488 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:21:21.488 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.488 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.488 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.488 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.488 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:21.488 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.489 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:21.489 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:21.489 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:21.489 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:21.489 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.489 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.489 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.057 request: 00:21:22.057 { 00:21:22.057 "name": "nvme0", 00:21:22.057 "trtype": "tcp", 00:21:22.057 "traddr": "10.0.0.2", 00:21:22.057 "adrfam": "ipv4", 00:21:22.057 "trsvcid": "4420", 00:21:22.057 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:22.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:22.057 "prchk_reftag": false, 00:21:22.057 "prchk_guard": false, 00:21:22.057 "hdgst": false, 00:21:22.057 "ddgst": false, 00:21:22.057 "dhchap_key": "key1", 00:21:22.057 "dhchap_ctrlr_key": "ckey1", 00:21:22.057 "allow_unrecognized_csi": false, 00:21:22.057 "method": "bdev_nvme_attach_controller", 00:21:22.057 "req_id": 1 00:21:22.057 } 00:21:22.057 Got JSON-RPC error response 00:21:22.057 response: 00:21:22.057 { 00:21:22.057 "code": -5, 00:21:22.057 "message": "Input/output error" 00:21:22.057 } 00:21:22.057 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:22.057 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:22.057 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:22.057 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:22.057 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:22.057 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.057 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.057 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.057 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1092992 00:21:22.057 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1092992 ']' 00:21:22.057 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1092992 00:21:22.057 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:22.057 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:22.057 17:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1092992 00:21:22.057 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:22.057 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:22.057 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1092992' 00:21:22.057 killing process with pid 1092992 00:21:22.057 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1092992 00:21:22.057 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1092992 00:21:22.057 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:22.057 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:22.057 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:22.057 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:22.057 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1114491 00:21:22.057 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:22.057 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1114491 00:21:22.057 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1114491 ']' 00:21:22.057 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.057 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:22.057 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.057 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:22.057 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.316 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:22.316 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:22.316 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:22.316 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:22.316 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.316 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.316 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:22.316 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1114491 00:21:22.316 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1114491 ']' 00:21:22.316 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.316 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:22.316 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
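[editor's sketch] The target is restarted here with DH-HMAC-CHAP logging enabled. Reduced to its essentials, command verbatim from the log; -L nvmf_auth turns on the nvmf_auth debug log flag, and --wait-for-rpc defers initialization until an explicit start RPC is sent:
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!   # 1114491 in this run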
00:21:22.316 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:22.316 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.575 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:22.575 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:22.575 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:22.575 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.575 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.835 null0 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hOa 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.xTc ]] 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xTc 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.5HZ 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.bAZ ]] 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bAZ 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:22.835 17:38:21 
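[editor's sketch] The loop above loads each generated secret file into the restarted target's keyring under the name that later RPCs reference. A sketch of the first two iterations, file names taken from this log (RPC is shorthand introduced here):
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC keyring_file_add_key key0  /tmp/spdk.key-null.hOa     # null-auth host key
  $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xTc   # its controller counterpart
  $RPC keyring_file_add_key key1  /tmp/spdk.key-sha256.5HZ
  $RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bAZ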
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.nNQ 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.AFN ]] 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.AFN 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.n7J 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
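[editor's sketch] key3 is loaded with no controller counterpart (the [[ -n '' ]] check above is false, so no ckey3 is registered), which makes the pass that follows unidirectional: the host proves its identity to the target, but not vice versa. The registration, verbatim from this log (RPC is shorthand introduced here):
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3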
00:21:22.835 17:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:23.404 nvme0n1 00:21:23.663 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.663 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.663 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.663 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.663 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.663 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.663 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.663 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.663 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.663 { 00:21:23.663 "cntlid": 1, 00:21:23.663 "qid": 0, 00:21:23.663 "state": "enabled", 00:21:23.663 "thread": "nvmf_tgt_poll_group_000", 00:21:23.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:23.663 "listen_address": { 00:21:23.663 "trtype": "TCP", 00:21:23.663 "adrfam": "IPv4", 00:21:23.663 "traddr": "10.0.0.2", 00:21:23.663 "trsvcid": "4420" 00:21:23.663 }, 00:21:23.663 "peer_address": { 00:21:23.663 "trtype": "TCP", 00:21:23.663 "adrfam": "IPv4", 00:21:23.663 "traddr": "10.0.0.1", 00:21:23.663 "trsvcid": "38016" 00:21:23.663 }, 00:21:23.663 "auth": { 00:21:23.663 "state": "completed", 00:21:23.663 "digest": "sha512", 00:21:23.663 "dhgroup": "ffdhe8192" 00:21:23.663 } 00:21:23.663 } 00:21:23.663 ]' 00:21:23.663 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.663 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.663 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.922 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:23.922 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.922 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.922 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.922 17:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.181 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:21:24.181 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.749 17:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.008 request: 00:21:25.008 { 00:21:25.008 "name": "nvme0", 00:21:25.008 "trtype": "tcp", 00:21:25.008 "traddr": "10.0.0.2", 00:21:25.008 "adrfam": "ipv4", 00:21:25.008 "trsvcid": "4420", 00:21:25.008 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:25.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:25.008 "prchk_reftag": false, 00:21:25.008 "prchk_guard": false, 00:21:25.008 "hdgst": false, 00:21:25.008 "ddgst": false, 00:21:25.008 "dhchap_key": "key3", 00:21:25.008 "allow_unrecognized_csi": false, 00:21:25.008 "method": "bdev_nvme_attach_controller", 00:21:25.008 "req_id": 1 00:21:25.008 } 00:21:25.008 Got JSON-RPC error response 00:21:25.008 response: 00:21:25.008 { 00:21:25.008 "code": -5, 00:21:25.008 "message": "Input/output error" 00:21:25.008 } 00:21:25.008 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:25.008 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:25.008 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:25.008 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:25.008 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:25.008 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:25.008 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:25.008 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:25.267 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:25.267 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:25.267 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:25.267 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:25.267 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:25.267 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:25.267 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:25.267 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:25.267 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.267 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.526 request: 00:21:25.526 { 00:21:25.526 "name": "nvme0", 00:21:25.526 "trtype": "tcp", 00:21:25.526 "traddr": "10.0.0.2", 00:21:25.526 "adrfam": "ipv4", 00:21:25.526 "trsvcid": "4420", 00:21:25.526 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:25.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:25.526 "prchk_reftag": false, 00:21:25.526 "prchk_guard": false, 00:21:25.526 "hdgst": false, 00:21:25.526 "ddgst": false, 00:21:25.526 "dhchap_key": "key3", 00:21:25.526 "allow_unrecognized_csi": false, 00:21:25.526 "method": "bdev_nvme_attach_controller", 00:21:25.526 "req_id": 1 00:21:25.526 } 00:21:25.526 Got JSON-RPC error response 00:21:25.526 response: 00:21:25.526 { 00:21:25.526 "code": -5, 00:21:25.526 "message": "Input/output error" 00:21:25.526 } 00:21:25.526 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:25.526 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:25.526 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:25.526 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:25.526 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:25.526 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:25.526 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:25.526 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:25.526 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:25.526 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:25.786 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:25.786 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.786 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.786 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.786 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:25.786 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.786 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.786 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.786 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:25.786 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:25.786 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:25.786 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:25.786 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:25.786 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:25.786 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:25.786 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:25.786 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:25.786 17:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:26.045 request: 00:21:26.045 { 00:21:26.045 "name": "nvme0", 00:21:26.045 "trtype": "tcp", 00:21:26.045 "traddr": "10.0.0.2", 00:21:26.045 "adrfam": "ipv4", 00:21:26.045 "trsvcid": "4420", 00:21:26.045 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:26.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:26.045 "prchk_reftag": false, 00:21:26.045 "prchk_guard": false, 00:21:26.045 "hdgst": false, 00:21:26.045 "ddgst": false, 00:21:26.045 "dhchap_key": "key0", 00:21:26.045 "dhchap_ctrlr_key": "key1", 00:21:26.045 "allow_unrecognized_csi": false, 00:21:26.045 "method": "bdev_nvme_attach_controller", 00:21:26.045 "req_id": 1 00:21:26.045 } 00:21:26.045 Got JSON-RPC error response 00:21:26.045 response: 00:21:26.045 { 00:21:26.045 "code": -5, 00:21:26.045 "message": "Input/output error" 00:21:26.045 } 00:21:26.045 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:26.045 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:26.045 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:26.045 17:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:26.045 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:26.045 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:26.045 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:26.304 nvme0n1 00:21:26.304 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:26.304 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:26.304 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.563 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.563 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.563 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.563 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:21:26.563 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.563 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.563 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.563 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:26.563 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:26.563 17:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:27.500 nvme0n1 00:21:27.501 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:27.501 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:27.501 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.501 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.501 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:27.501 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.501 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.501 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.501 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:27.501 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:27.501 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.759 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.759 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:21:27.759 17:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: --dhchap-ctrl-secret DHHC-1:03:MDRkYzFlYTgxOWM0NDdjOGFkMmJhMmRhYWIzOWJmODE1ZGM0MDVjYzJhMTgxODkyOGIwMjI5ZTQwNzI5YmQyZtfrfJA=: 00:21:28.328 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:28.328 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:28.328 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:28.328 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:28.328 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:28.328 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:28.328 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:28.328 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.328 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.586 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:21:28.586 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:28.586 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:28.586 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:21:28.586 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:28.586 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:21:28.586 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:28.586 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:28.586 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:28.586 17:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:29.154 request: 00:21:29.154 { 00:21:29.154 "name": "nvme0", 00:21:29.154 "trtype": "tcp", 00:21:29.154 "traddr": "10.0.0.2", 00:21:29.154 "adrfam": "ipv4", 00:21:29.154 "trsvcid": "4420", 00:21:29.154 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:29.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:21:29.154 "prchk_reftag": false, 00:21:29.154 "prchk_guard": false, 00:21:29.154 "hdgst": false, 00:21:29.154 "ddgst": false, 00:21:29.154 "dhchap_key": "key1", 00:21:29.154 "allow_unrecognized_csi": false, 00:21:29.154 "method": "bdev_nvme_attach_controller", 00:21:29.154 "req_id": 1 00:21:29.154 } 00:21:29.154 Got JSON-RPC error response 00:21:29.154 response: 00:21:29.154 { 00:21:29.154 "code": -5, 00:21:29.154 "message": "Input/output error" 00:21:29.154 } 00:21:29.154 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:29.154 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:29.154 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:29.154 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:29.154 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:29.154 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:29.154 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:29.721 nvme0n1 00:21:29.721 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:29.721 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:29.721 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.979 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.979 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.979 17:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.238 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:30.238 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.238 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.238 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.238 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:30.238 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:30.238 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:30.497 nvme0n1 00:21:30.497 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:30.497 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:30.497 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.756 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.756 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.756 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.756 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:30.756 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.756 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.756 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.756 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: '' 2s 00:21:30.756 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:30.756 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:30.756 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: 00:21:30.756 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:30.756 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:30.756 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:30.756 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: ]] 00:21:30.756 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTEwNWY3ZTczYWVkMDgzMzJlZDRlYjc4ZDU5NTdjYzHLZ6fW: 00:21:30.756 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:30.756 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:30.756 17:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:33.292 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:33.292 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:21:33.292 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:33.292 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:21:33.292 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:33.292 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:21:33.292 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:21:33.292 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:33.292 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.292 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.292 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.292 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: 2s 00:21:33.292 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:33.292 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:33.292 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:33.292 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: 00:21:33.293 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:33.293 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:33.293 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:33.293 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: ]] 00:21:33.293 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Y2YzMDZlMzgxNmU0OTBlODhkZmNkZGFmYTM3YjM0NTNiZGUzYmYyZmQwZjcwMTc5x5PEsA==: 00:21:33.293 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:33.293 17:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:35.199 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:35.199 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:21:35.199 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:35.199 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:21:35.199 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:35.199 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:21:35.199 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:21:35.199 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.199 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:35.199 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.199 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.199 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.199 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:35.199 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:35.199 17:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:35.766 nvme0n1 00:21:35.766 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:35.766 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.766 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.766 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.766 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:35.767 17:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:36.334 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:36.334 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:36.334 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.334 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.334 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:36.334 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.334 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.334 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.334 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:36.334 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:36.593 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:36.593 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:36.593 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.852 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.852 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:36.852 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.852 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.852 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.852 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:36.852 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:36.852 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:36.852 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:36.852 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:36.852 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:36.852 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:36.852 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:36.852 17:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:37.111 request: 00:21:37.111 { 00:21:37.111 "name": "nvme0", 00:21:37.111 "dhchap_key": "key1", 00:21:37.111 "dhchap_ctrlr_key": "key3", 00:21:37.111 "method": "bdev_nvme_set_keys", 00:21:37.111 "req_id": 1 00:21:37.111 } 00:21:37.111 Got JSON-RPC error response 00:21:37.111 response: 00:21:37.111 { 00:21:37.111 "code": -13, 00:21:37.111 "message": "Permission denied" 00:21:37.111 } 00:21:37.111 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:37.111 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:37.111 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:37.111 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:37.111 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:37.111 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:37.111 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.370 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:21:37.370 17:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:38.307 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:38.307 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:38.307 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.566 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:38.566 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:38.566 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.566 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.566 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.566 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:38.566 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:38.566 17:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:39.502 nvme0n1 00:21:39.502 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:39.502 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.502 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.502 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.502 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:39.502 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:39.502 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:39.502 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
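The "(( 1 != 0 ))" check followed by "sleep 1s" above is target/auth.sh polling for the host's controller list to drain after a re-key. A minimal standalone sketch of that wait loop, assuming the same rpc.py path and /var/tmp/host.sock socket used throughout this run; the loop bound is added here purely for illustration:

    # Poll bdev_nvme_get_controllers until the host reports no controllers.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for i in {1..30}; do
        n=$("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length)
        (( n == 0 )) && break    # reconnect/teardown finished
        sleep 1
    done

As in the log, the RPC returns a JSON array of attached controllers, so "jq length" of 0 means the detach (or failed re-attach) has completed and the test can proceed.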
00:21:39.502 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.502 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:39.502 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.502 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:39.502 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:39.761 request: 00:21:39.761 { 00:21:39.761 "name": "nvme0", 00:21:39.761 "dhchap_key": "key2", 00:21:39.761 "dhchap_ctrlr_key": "key0", 00:21:39.761 "method": "bdev_nvme_set_keys", 00:21:39.761 "req_id": 1 00:21:39.761 } 00:21:39.761 Got JSON-RPC error response 00:21:39.761 response: 00:21:39.761 { 00:21:39.761 "code": -13, 00:21:39.761 "message": "Permission denied" 00:21:39.761 } 00:21:39.761 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:39.761 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:39.761 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:39.761 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:39.761 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:39.761 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:39.761 17:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.020 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:40.020 17:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:40.956 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:40.956 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:40.956 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.214 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:41.214 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:41.214 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:41.214 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1093093 00:21:41.214 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1093093 ']' 00:21:41.214 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1093093 00:21:41.214 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:41.214 
17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:41.214 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1093093 00:21:41.214 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:41.214 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:41.214 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1093093' 00:21:41.214 killing process with pid 1093093 00:21:41.214 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1093093 00:21:41.214 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1093093 00:21:41.473 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:41.473 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:41.473 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:41.732 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:41.732 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:41.732 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:41.732 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:41.732 rmmod nvme_tcp 00:21:41.732 rmmod nvme_fabrics 00:21:41.732 rmmod nvme_keyring 00:21:41.732 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:41.732 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:41.732 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:41.732 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 1114491 ']' 00:21:41.732 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 1114491 00:21:41.732 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1114491 ']' 00:21:41.732 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1114491 00:21:41.732 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:41.732 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:41.732 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1114491 00:21:41.732 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:41.732 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:41.732 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1114491' 00:21:41.732 killing process with pid 1114491 00:21:41.732 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1114491 00:21:41.732 17:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1114491 00:21:41.991 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:41.991 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:41.991 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:41.991 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:41.991 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:21:41.991 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:41.991 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:21:41.991 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:41.991 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:41.991 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.991 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.991 17:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.896 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:43.896 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.hOa /tmp/spdk.key-sha256.5HZ /tmp/spdk.key-sha384.nNQ /tmp/spdk.key-sha512.n7J /tmp/spdk.key-sha512.xTc /tmp/spdk.key-sha384.bAZ /tmp/spdk.key-sha256.AFN '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:43.896 00:21:43.896 real 2m31.264s 00:21:43.896 user 5m48.492s 00:21:43.896 sys 0m24.134s 00:21:43.896 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:43.896 17:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.896 ************************************ 00:21:43.896 END TEST nvmf_auth_target 00:21:43.896 ************************************ 00:21:43.896 17:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:43.896 17:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:43.896 17:38:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:43.896 17:38:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:43.896 17:38:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:44.156 ************************************ 00:21:44.156 START TEST nvmf_bdevio_no_huge 00:21:44.156 ************************************ 00:21:44.156 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:44.156 * Looking for test storage... 
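The teardown recorded above (nvmftestfini followed by cleanup) unloads the kernel NVMe-oF initiator modules, strips only the SPDK_NVMF iptables rules, flushes the test interface, and removes the generated DHCHAP key files. A condensed sketch of those steps, assuming the interface name (cvl_0_1) seen in this run; the key-file glob is an illustrative stand-in for the explicit file list in the log:

    # Cleanup sequence mirrored from the nvmftestfini output above.
    modprobe -v -r nvme-tcp                               # also drops nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # keep non-test rules intact
    ip -4 addr flush cvl_0_1
    rm -f /tmp/spdk.key-*                                 # generated DHCHAP key files

Filtering the saved ruleset through "grep -v SPDK_NVMF" before restoring is how the harness removes only its own firewall entries without disturbing the host's other rules.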
00:21:44.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:44.156 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:44.156 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:21:44.156 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:44.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.157 --rc genhtml_branch_coverage=1 00:21:44.157 --rc genhtml_function_coverage=1 00:21:44.157 --rc genhtml_legend=1 00:21:44.157 --rc geninfo_all_blocks=1 00:21:44.157 --rc geninfo_unexecuted_blocks=1 00:21:44.157 00:21:44.157 ' 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:44.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.157 --rc genhtml_branch_coverage=1 00:21:44.157 --rc genhtml_function_coverage=1 00:21:44.157 --rc genhtml_legend=1 00:21:44.157 --rc geninfo_all_blocks=1 00:21:44.157 --rc geninfo_unexecuted_blocks=1 00:21:44.157 00:21:44.157 ' 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:44.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.157 --rc genhtml_branch_coverage=1 00:21:44.157 --rc genhtml_function_coverage=1 00:21:44.157 --rc genhtml_legend=1 00:21:44.157 --rc geninfo_all_blocks=1 00:21:44.157 --rc geninfo_unexecuted_blocks=1 00:21:44.157 00:21:44.157 ' 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:44.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.157 --rc genhtml_branch_coverage=1 00:21:44.157 --rc genhtml_function_coverage=1 00:21:44.157 --rc genhtml_legend=1 00:21:44.157 --rc geninfo_all_blocks=1 00:21:44.157 --rc geninfo_unexecuted_blocks=1 00:21:44.157 00:21:44.157 ' 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.157 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:21:44.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:21:44.158 17:38:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:21:50.743 
17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:50.743 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:50.743 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:50.743 Found net devices under 0000:86:00.0: cvl_0_0 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:50.743 Found net devices under 0000:86:00.1: cvl_0_1 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.743 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:50.744 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:21:50.744 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:50.744 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:50.744 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:50.744 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:50.744 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:50.744 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.744 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:50.744 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:50.744 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:50.744 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:50.744 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:50.744 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:50.744 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:50.744 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.744 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:50.744 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:50.744 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:50.744 17:38:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:50.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:50.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:21:50.744 00:21:50.744 --- 10.0.0.2 ping statistics --- 00:21:50.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.744 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:50.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:50.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:21:50.744 00:21:50.744 --- 10.0.0.1 ping statistics --- 00:21:50.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.744 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=1121149 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 1121149 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1121149 ']' 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:50.744 17:38:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:50.744 [2024-10-14 17:38:49.265300] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:21:50.744 [2024-10-14 17:38:49.265349] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:50.744 [2024-10-14 17:38:49.344396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:50.744 [2024-10-14 17:38:49.390904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.744 [2024-10-14 17:38:49.390938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.744 [2024-10-14 17:38:49.390947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.744 [2024-10-14 17:38:49.390953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.744 [2024-10-14 17:38:49.390958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
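The nvmfappstart step being traced here reduces to launching nvmf_tgt inside the just-created cvl_0_0_ns_spdk namespace and waiting for its RPC socket. A minimal stand-alone sketch of that sequence, with the harness's waitforlisten helper replaced by a plain polling loop and paths as used on this rig:

sudo ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &   # 1024 MB of ordinary 4K-page memory; 0x78 = cores 3-6
nvmfpid=$!
# poll the default UNIX-domain RPC socket until the app answers
until sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
  -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done

The four reactor lines that follow confirm the 0x78 core mask: one reactor each on cores 3 through 6.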
00:21:50.744 [2024-10-14 17:38:49.392181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:50.744 [2024-10-14 17:38:49.392287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:21:50.744 [2024-10-14 17:38:49.392419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:50.744 [2024-10-14 17:38:49.392419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:21:51.003 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:51.003 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:21:51.003 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:51.003 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:51.003 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:51.003 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.003 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:51.003 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.003 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:51.263 [2024-10-14 17:38:50.146391] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:51.263 Malloc0 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:51.263 [2024-10-14 17:38:50.190695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:51.263 { 00:21:51.263 "params": { 00:21:51.263 "name": "Nvme$subsystem", 00:21:51.263 "trtype": "$TEST_TRANSPORT", 00:21:51.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.263 "adrfam": "ipv4", 00:21:51.263 "trsvcid": "$NVMF_PORT", 00:21:51.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.263 "hdgst": ${hdgst:-false}, 00:21:51.263 "ddgst": ${ddgst:-false} 00:21:51.263 }, 00:21:51.263 "method": "bdev_nvme_attach_controller" 00:21:51.263 } 00:21:51.263 EOF 00:21:51.263 )") 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:21:51.263 17:38:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:51.263 "params": { 00:21:51.263 "name": "Nvme1", 00:21:51.263 "trtype": "tcp", 00:21:51.263 "traddr": "10.0.0.2", 00:21:51.263 "adrfam": "ipv4", 00:21:51.263 "trsvcid": "4420", 00:21:51.263 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.263 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:51.263 "hdgst": false, 00:21:51.263 "ddgst": false 00:21:51.263 }, 00:21:51.263 "method": "bdev_nvme_attach_controller" 00:21:51.263 }' 00:21:51.263 [2024-10-14 17:38:50.242263] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
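For reference, the rpc_cmd traces above map one-to-one onto scripts/rpc.py calls; outside the harness the same target assembly can be reproduced roughly as follows (the default RPC socket /var/tmp/spdk.sock is assumed):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8192-byte IO unit size, as traced above
$rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev with 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420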
00:21:51.263 [2024-10-14 17:38:50.242311] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1121397 ] 00:21:51.263 [2024-10-14 17:38:50.314270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:51.263 [2024-10-14 17:38:50.362240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.263 [2024-10-14 17:38:50.362346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.263 [2024-10-14 17:38:50.362347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.523 I/O targets: 00:21:51.523 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:51.523 00:21:51.523 00:21:51.523 CUnit - A unit testing framework for C - Version 2.1-3 00:21:51.523 http://cunit.sourceforge.net/ 00:21:51.523 00:21:51.523 00:21:51.523 Suite: bdevio tests on: Nvme1n1 00:21:51.523 Test: blockdev write read block ...passed 00:21:51.523 Test: blockdev write zeroes read block ...passed 00:21:51.781 Test: blockdev write zeroes read no split ...passed 00:21:51.781 Test: blockdev write zeroes read split ...passed 00:21:51.781 Test: blockdev write zeroes read split partial ...passed 00:21:51.781 Test: blockdev reset ...[2024-10-14 17:38:50.736457] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:51.781 [2024-10-14 17:38:50.736519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1644a20 (9): Bad file descriptor 00:21:51.782 [2024-10-14 17:38:50.748722] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
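The --json /dev/fd/62 argument visible above is bash process substitution feeding bdevio the JSON that gen_nvmf_target_json printed. Spelled out, the launch looks roughly like this; the outer "subsystems"/"bdev" wrapper comes from that helper and is assumed here, since only the inner config entry appears in the trace:

sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
  --no-huge -s 1024 --json <(
  cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)

The attached controller surfaces as bdev Nvme1n1, which is exactly the device the suite below exercises; the COMPARE FAILURE and ABORTED - FAILED FUSED notices further down are the expected outcomes of the comparev-and-writev test, not errors.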
00:21:51.782 passed 00:21:51.782 Test: blockdev write read 8 blocks ...passed 00:21:51.782 Test: blockdev write read size > 128k ...passed 00:21:51.782 Test: blockdev write read invalid size ...passed 00:21:51.782 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:51.782 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:51.782 Test: blockdev write read max offset ...passed 00:21:51.782 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:51.782 Test: blockdev writev readv 8 blocks ...passed 00:21:51.782 Test: blockdev writev readv 30 x 1block ...passed 00:21:52.041 Test: blockdev writev readv block ...passed 00:21:52.041 Test: blockdev writev readv size > 128k ...passed 00:21:52.041 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:52.041 Test: blockdev comparev and writev ...[2024-10-14 17:38:50.958418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.041 [2024-10-14 17:38:50.958446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:52.041 [2024-10-14 17:38:50.958459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.041 [2024-10-14 17:38:50.958468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:52.041 [2024-10-14 17:38:50.958704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.041 [2024-10-14 17:38:50.958714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:52.041 [2024-10-14 17:38:50.958726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.041 [2024-10-14 17:38:50.958733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:52.041 [2024-10-14 17:38:50.958952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.041 [2024-10-14 17:38:50.958961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:52.041 [2024-10-14 17:38:50.958973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.041 [2024-10-14 17:38:50.958979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:52.041 [2024-10-14 17:38:50.959199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.041 [2024-10-14 17:38:50.959209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:52.041 [2024-10-14 17:38:50.959220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:52.041 [2024-10-14 17:38:50.959227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:52.041 passed 00:21:52.041 Test: blockdev nvme passthru rw ...passed 00:21:52.041 Test: blockdev nvme passthru vendor specific ...[2024-10-14 17:38:51.041037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:52.041 [2024-10-14 17:38:51.041056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:52.041 [2024-10-14 17:38:51.041180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:52.041 [2024-10-14 17:38:51.041190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:52.041 [2024-10-14 17:38:51.041293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:52.041 [2024-10-14 17:38:51.041302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:52.041 [2024-10-14 17:38:51.041403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:52.042 [2024-10-14 17:38:51.041413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:52.042 passed 00:21:52.042 Test: blockdev nvme admin passthru ...passed 00:21:52.042 Test: blockdev copy ...passed 00:21:52.042 00:21:52.042 Run Summary: Type Total Ran Passed Failed Inactive 00:21:52.042 suites 1 1 n/a 0 0 00:21:52.042 tests 23 23 23 0 0 00:21:52.042 asserts 152 152 152 0 n/a 00:21:52.042 00:21:52.042 Elapsed time = 1.064 seconds 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:52.301 rmmod nvme_tcp 00:21:52.301 rmmod nvme_fabrics 00:21:52.301 rmmod nvme_keyring 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 1121149 ']' 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 1121149 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1121149 ']' 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1121149 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:52.301 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1121149 00:21:52.560 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:21:52.560 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:21:52.560 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1121149' 00:21:52.560 killing process with pid 1121149 00:21:52.560 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1121149 00:21:52.560 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1121149 00:21:52.819 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:52.819 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:52.819 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:52.819 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:21:52.819 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:21:52.819 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:52.819 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:21:52.819 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:52.819 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:52.819 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.819 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.819 17:38:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.726 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:54.726 00:21:54.726 real 0m10.806s 00:21:54.726 user 0m13.145s 00:21:54.726 sys 0m5.413s 00:21:54.726 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:54.726 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:21:54.726 ************************************ 00:21:54.726 END TEST nvmf_bdevio_no_huge 00:21:54.726 ************************************ 00:21:54.986 17:38:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:54.986 17:38:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:54.986 17:38:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:54.986 17:38:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:54.986 ************************************ 00:21:54.986 START TEST nvmf_tls 00:21:54.986 ************************************ 00:21:54.986 17:38:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:54.986 * Looking for test storage... 00:21:54.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:54.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.987 --rc genhtml_branch_coverage=1 00:21:54.987 --rc genhtml_function_coverage=1 00:21:54.987 --rc genhtml_legend=1 00:21:54.987 --rc geninfo_all_blocks=1 00:21:54.987 --rc geninfo_unexecuted_blocks=1 00:21:54.987 00:21:54.987 ' 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:54.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.987 --rc genhtml_branch_coverage=1 00:21:54.987 --rc genhtml_function_coverage=1 00:21:54.987 --rc genhtml_legend=1 00:21:54.987 --rc geninfo_all_blocks=1 00:21:54.987 --rc geninfo_unexecuted_blocks=1 00:21:54.987 00:21:54.987 ' 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:54.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.987 --rc genhtml_branch_coverage=1 00:21:54.987 --rc genhtml_function_coverage=1 00:21:54.987 --rc genhtml_legend=1 00:21:54.987 --rc geninfo_all_blocks=1 00:21:54.987 --rc geninfo_unexecuted_blocks=1 00:21:54.987 00:21:54.987 ' 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:54.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.987 --rc genhtml_branch_coverage=1 00:21:54.987 --rc genhtml_function_coverage=1 00:21:54.987 --rc genhtml_legend=1 00:21:54.987 --rc geninfo_all_blocks=1 00:21:54.987 --rc geninfo_unexecuted_blocks=1 00:21:54.987 00:21:54.987 ' 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
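The lt 1.15 2 walk traced above is scripts/common.sh's dot-field version comparison, used to decide which lcov option spelling applies. A stripped-down equivalent of that comparison (a sketch, not the harness's exact code):

lt() {
  # succeed (return 0) when $1 sorts strictly before $2, numeric per dot-field
  local -a a b
  local i
  IFS=. read -ra a <<< "$1"
  IFS=. read -ra b <<< "$2"
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0
    ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
  done
  return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo 'lcov predates 2.x: keep the --rc lcov_branch_coverage=1 option spelling'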
00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:54.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:54.987 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:55.247 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:55.248 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:21:55.248 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls --
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:55.248 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.248 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:55.248 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:55.248 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:55.248 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.248 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.248 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.248 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:55.248 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:55.248 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:21:55.248 17:38:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
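The e810, x722, and mlx arrays being appended here are filled from a PCI bus cache keyed by vendor:device ID; 0x8086:0x159b is the Intel E810-family part this rig carries. The per-device half of the detection, mapping a PCI function to its driver and kernel netdev, can be reproduced straight from sysfs (a sketch; the BDFs are the two ports reported in the Found lines below):

for pci in 0000:86:00.0 0000:86:00.1; do
  drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$pci/driver")")   # e.g. ice
  echo "$pci driver=$drv netdevs: $(ls "/sys/bus/pci/devices/$pci/net")"
done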
00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:01.819 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:01.819 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:01.819 Found net devices under 0000:86:00.0: cvl_0_0 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:01.819 Found net devices under 0000:86:00.1: cvl_0_1 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:01.819 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.820 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.820 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:01.820 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:01.820 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:01.820 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:01.820 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:01.820 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:01.820 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:01.820 17:38:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:01.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:22:01.820 00:22:01.820 --- 10.0.0.2 ping statistics --- 00:22:01.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.820 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:01.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:01.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:22:01.820 00:22:01.820 --- 10.0.0.1 ping statistics --- 00:22:01.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.820 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1125180 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1125180 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1125180 ']' 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.820 [2024-10-14 17:39:00.183217] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
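The nvmf_tcp_init sequence above carved a point-to-point test link out of the dual-port NIC found earlier (the e810 pair at 0000:86:00.0/0000:86:00.1): port cvl_0_0 (10.0.0.2) was moved into the private namespace cvl_0_0_ns_spdk that will host the target, while its peer cvl_0_1 (10.0.0.1) stayed in the default namespace as the initiator side, and the two pings confirmed reachability in both directions. A condensed sketch of that plumbing, using only commands taken from the trace above (root required):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # default ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> default ns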
00:22:01.820 [2024-10-14 17:39:00.183260] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.820 [2024-10-14 17:39:00.258044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.820 [2024-10-14 17:39:00.298755] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.820 [2024-10-14 17:39:00.298792] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.820 [2024-10-14 17:39:00.298800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.820 [2024-10-14 17:39:00.298805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.820 [2024-10-14 17:39:00.298811] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:01.820 [2024-10-14 17:39:00.299407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:01.820 true 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:01.820 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:02.079 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:02.079 17:39:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:02.079 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:02.079 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:02.079 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:02.339 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:02.339 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:02.602 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:02.602 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:02.602 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:02.602 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:02.602 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:02.602 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:02.602 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:02.862 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:02.862 17:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:03.121 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:03.121 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:03.121 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:03.380 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:03.380 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:03.380 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:03.380 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:03.380 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:03.380 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:03.380 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:22:03.380 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:03.380 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:22:03.380 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:22:03.380 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:22:03.380 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:03.380 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:03.380 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:03.380 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:22:03.380 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:03.380 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:22:03.380 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:22:03.380 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:22:03.639 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:03.639 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:03.639 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.zCroqZxuk1 00:22:03.639 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:03.639 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.NpqqPHjz6y 00:22:03.639 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:03.639 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:03.639 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.zCroqZxuk1 00:22:03.639 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.NpqqPHjz6y 00:22:03.639 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:03.639 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:03.898 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.zCroqZxuk1 00:22:03.898 17:39:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zCroqZxuk1 00:22:03.898 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:04.157 [2024-10-14 17:39:03.161103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.157 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:04.416 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:04.416 [2024-10-14 17:39:03.518006] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:04.416 [2024-10-14 17:39:03.518221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.416 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:04.675 malloc0 00:22:04.675 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:04.934 17:39:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zCroqZxuk1 00:22:04.934 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:05.194 17:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.zCroqZxuk1 00:22:15.307 Initializing NVMe Controllers 00:22:15.307 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:15.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:15.307 Initialization complete. Launching workers. 00:22:15.307 ======================================================== 00:22:15.307 Latency(us) 00:22:15.307 Device Information : IOPS MiB/s Average min max 00:22:15.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16844.63 65.80 3799.50 835.28 5829.31 00:22:15.307 ======================================================== 00:22:15.307 Total : 16844.63 65.80 3799.50 835.28 5829.31 00:22:15.307 00:22:15.307 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zCroqZxuk1 00:22:15.307 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:15.307 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:15.307 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:15.307 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zCroqZxuk1 00:22:15.307 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:15.307 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1128022 00:22:15.308 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:15.308 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:15.308 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1128022 /var/tmp/bdevperf.sock 00:22:15.308 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1128022 ']' 00:22:15.308 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:15.308 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:15.308 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
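Both interchange keys used in this stretch, the one written to /tmp/tmp.zCroqZxuk1 (just exercised end-to-end by the spdk_nvme_perf run above) and the mismatched one in /tmp/tmp.NpqqPHjz6y, were produced by format_interchange_psk earlier via an inline python snippet. The layout is the TP 8006 interchange format: a version prefix, a two-digit hash identifier, and a base64 payload holding the configured key bytes plus a trailing CRC-32. A minimal standalone re-derivation, on the assumption that the appended CRC is zlib's crc32 in little-endian order (which reproduces the string logged above):

    python3 -c 'import base64, sys, zlib
    key = sys.argv[1].encode()
    crc = zlib.crc32(key).to_bytes(4, "little")   # assumed little-endian per TP 8006
    print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")' \
        00112233445566778899aabbccddeeff
    # expected: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: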
00:22:15.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:15.308 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:15.308 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.308 [2024-10-14 17:39:14.409298] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:22:15.308 [2024-10-14 17:39:14.409343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128022 ] 00:22:15.567 [2024-10-14 17:39:14.476222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.567 [2024-10-14 17:39:14.518220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.568 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:15.568 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:15.568 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zCroqZxuk1 00:22:15.827 17:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:15.827 [2024-10-14 17:39:14.951976] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:16.086 TLSTESTn1 00:22:16.086 17:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:16.086 Running I/O for 10 seconds... 
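Every bdevperf case from here on drives the same two RPCs against the bdevperf app's private socket: register a key file under a keyring name, then attach a TLS controller that references that name. The happy-path pair just issued, abbreviated (rpc.py stands for the full spdk/scripts/rpc.py path used in the trace):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zCroqZxuk1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

The verification run below then pushes 128-deep 4 KiB I/O through the resulting TLSTESTn1 namespace for 10 seconds.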
00:22:18.402 4952.00 IOPS, 19.34 MiB/s [2024-10-14T15:39:18.478Z] 4982.00 IOPS, 19.46 MiB/s [2024-10-14T15:39:19.416Z] 4955.67 IOPS, 19.36 MiB/s [2024-10-14T15:39:20.353Z] 4995.25 IOPS, 19.51 MiB/s [2024-10-14T15:39:21.291Z] 4984.40 IOPS, 19.47 MiB/s [2024-10-14T15:39:22.230Z] 5012.00 IOPS, 19.58 MiB/s [2024-10-14T15:39:23.167Z] 5021.86 IOPS, 19.62 MiB/s [2024-10-14T15:39:24.545Z] 5028.38 IOPS, 19.64 MiB/s [2024-10-14T15:39:25.483Z] 5023.22 IOPS, 19.62 MiB/s [2024-10-14T15:39:25.483Z] 5006.80 IOPS, 19.56 MiB/s 00:22:26.345 Latency(us) 00:22:26.345 [2024-10-14T15:39:25.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.345 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:26.345 Verification LBA range: start 0x0 length 0x2000 00:22:26.345 TLSTESTn1 : 10.02 5011.03 19.57 0.00 0.00 25507.41 5835.82 50681.17 00:22:26.345 [2024-10-14T15:39:25.483Z] =================================================================================================================== 00:22:26.345 [2024-10-14T15:39:25.483Z] Total : 5011.03 19.57 0.00 0.00 25507.41 5835.82 50681.17 00:22:26.345 { 00:22:26.345 "results": [ 00:22:26.345 { 00:22:26.345 "job": "TLSTESTn1", 00:22:26.345 "core_mask": "0x4", 00:22:26.345 "workload": "verify", 00:22:26.345 "status": "finished", 00:22:26.345 "verify_range": { 00:22:26.345 "start": 0, 00:22:26.345 "length": 8192 00:22:26.345 }, 00:22:26.345 "queue_depth": 128, 00:22:26.345 "io_size": 4096, 00:22:26.345 "runtime": 10.017094, 00:22:26.345 "iops": 5011.034138244086, 00:22:26.345 "mibps": 19.57435210251596, 00:22:26.345 "io_failed": 0, 00:22:26.345 "io_timeout": 0, 00:22:26.345 "avg_latency_us": 25507.41183247385, 00:22:26.345 "min_latency_us": 5835.8247619047615, 00:22:26.345 "max_latency_us": 50681.17333333333 00:22:26.346 } 00:22:26.346 ], 00:22:26.346 "core_count": 1 00:22:26.346 } 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1128022 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1128022 ']' 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1128022 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1128022 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1128022' 00:22:26.346 killing process with pid 1128022 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1128022 00:22:26.346 Received shutdown signal, test time was about 10.000000 seconds 00:22:26.346 00:22:26.346 Latency(us) 00:22:26.346 [2024-10-14T15:39:25.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.346 [2024-10-14T15:39:25.484Z] 
=================================================================================================================== 00:22:26.346 [2024-10-14T15:39:25.484Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1128022 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NpqqPHjz6y 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NpqqPHjz6y 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NpqqPHjz6y 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NpqqPHjz6y 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1129870 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1129870 /var/tmp/bdevperf.sock 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1129870 ']' 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:26.346 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.346 [2024-10-14 17:39:25.449698] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:22:26.346 [2024-10-14 17:39:25.449745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1129870 ] 00:22:26.606 [2024-10-14 17:39:25.507599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.606 [2024-10-14 17:39:25.550183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.606 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:26.606 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:26.606 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NpqqPHjz6y 00:22:26.865 17:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:26.865 [2024-10-14 17:39:25.992010] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:26.865 [2024-10-14 17:39:25.996687] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:26.865 [2024-10-14 17:39:25.997320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76c230 (107): Transport endpoint is not connected 00:22:26.866 [2024-10-14 17:39:25.998310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76c230 (9): Bad file descriptor 00:22:26.866 [2024-10-14 17:39:25.999311] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:26.866 [2024-10-14 17:39:25.999323] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:26.866 [2024-10-14 17:39:25.999330] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:26.866 [2024-10-14 17:39:25.999340] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
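This first negative case registered the second key while the target still trusts only the first, so the TLS handshake fails, the socket is torn down (errno 107 on the flush, then errno 9 on the stale fd), and the attach surfaces the teardown as an I/O error; the JSON-RPC dump that follows reports code -5. The expected-failure shape of the case, sketched with the same RPCs (rpc.py again abbreviates the full scripts path):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NpqqPHjz6y
    if rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
        echo 'unexpected success: mismatched PSK must not connect' >&2
        exit 1
    fi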
00:22:26.866 request: 00:22:26.866 { 00:22:26.866 "name": "TLSTEST", 00:22:26.866 "trtype": "tcp", 00:22:26.866 "traddr": "10.0.0.2", 00:22:26.866 "adrfam": "ipv4", 00:22:26.866 "trsvcid": "4420", 00:22:26.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:26.866 "prchk_reftag": false, 00:22:26.866 "prchk_guard": false, 00:22:26.866 "hdgst": false, 00:22:26.866 "ddgst": false, 00:22:26.866 "psk": "key0", 00:22:26.866 "allow_unrecognized_csi": false, 00:22:26.866 "method": "bdev_nvme_attach_controller", 00:22:26.866 "req_id": 1 00:22:26.866 } 00:22:26.866 Got JSON-RPC error response 00:22:26.866 response: 00:22:26.866 { 00:22:26.866 "code": -5, 00:22:26.866 "message": "Input/output error" 00:22:26.866 } 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1129870 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1129870 ']' 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1129870 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1129870 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1129870' 00:22:27.126 killing process with pid 1129870 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1129870 00:22:27.126 Received shutdown signal, test time was about 10.000000 seconds 00:22:27.126 00:22:27.126 Latency(us) 00:22:27.126 [2024-10-14T15:39:26.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.126 [2024-10-14T15:39:26.264Z] =================================================================================================================== 00:22:27.126 [2024-10-14T15:39:26.264Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1129870 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zCroqZxuk1 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.zCroqZxuk1 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zCroqZxuk1 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zCroqZxuk1 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1129890 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1129890 /var/tmp/bdevperf.sock 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1129890 ']' 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:27.126 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.126 [2024-10-14 17:39:26.265593] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:22:27.126 [2024-10-14 17:39:26.265646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1129890 ] 00:22:27.385 [2024-10-14 17:39:26.330256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.385 [2024-10-14 17:39:26.370119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.385 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.385 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:27.385 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zCroqZxuk1 00:22:27.644 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:27.904 [2024-10-14 17:39:26.828468] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:27.904 [2024-10-14 17:39:26.837165] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:27.904 [2024-10-14 17:39:26.837187] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:27.904 [2024-10-14 17:39:26.837227] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:27.904 [2024-10-14 17:39:26.837832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1257230 (107): Transport endpoint is not connected 00:22:27.904 [2024-10-14 17:39:26.838825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1257230 (9): Bad file descriptor 00:22:27.904 [2024-10-14 17:39:26.839827] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:27.904 [2024-10-14 17:39:26.839836] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:27.904 [2024-10-14 17:39:26.839842] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:27.904 [2024-10-14 17:39:26.839852] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
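Unlike the previous case, this one fails during PSK lookup on the target: the client offers a PSK identity naming host2, and tcp_sock_get_key finds nothing registered for that host/subsystem pair. Judging by the logged message, the identity string is a fixed 'NVMe0R01' prefix (01 matching the hash id in the interchange key) followed by the host and subsystem NQNs; sketched for illustration only:

    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
    # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1  (as in the error above)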
00:22:27.904 request: 00:22:27.904 { 00:22:27.904 "name": "TLSTEST", 00:22:27.904 "trtype": "tcp", 00:22:27.904 "traddr": "10.0.0.2", 00:22:27.904 "adrfam": "ipv4", 00:22:27.904 "trsvcid": "4420", 00:22:27.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.904 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:27.904 "prchk_reftag": false, 00:22:27.904 "prchk_guard": false, 00:22:27.904 "hdgst": false, 00:22:27.904 "ddgst": false, 00:22:27.904 "psk": "key0", 00:22:27.904 "allow_unrecognized_csi": false, 00:22:27.904 "method": "bdev_nvme_attach_controller", 00:22:27.904 "req_id": 1 00:22:27.904 } 00:22:27.904 Got JSON-RPC error response 00:22:27.904 response: 00:22:27.904 { 00:22:27.904 "code": -5, 00:22:27.904 "message": "Input/output error" 00:22:27.904 } 00:22:27.904 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1129890 00:22:27.904 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1129890 ']' 00:22:27.904 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1129890 00:22:27.904 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:27.904 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:27.904 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1129890 00:22:27.904 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:27.904 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:27.904 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1129890' 00:22:27.904 killing process with pid 1129890 00:22:27.904 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1129890 00:22:27.904 Received shutdown signal, test time was about 10.000000 seconds 00:22:27.904 00:22:27.904 Latency(us) 00:22:27.904 [2024-10-14T15:39:27.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.904 [2024-10-14T15:39:27.042Z] =================================================================================================================== 00:22:27.904 [2024-10-14T15:39:27.042Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:27.904 17:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1129890 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zCroqZxuk1 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.zCroqZxuk1 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zCroqZxuk1 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zCroqZxuk1 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1130125 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1130125 /var/tmp/bdevperf.sock 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1130125 ']' 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.164 [2024-10-14 17:39:27.107015] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:22:28.164 [2024-10-14 17:39:27.107062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1130125 ] 00:22:28.164 [2024-10-14 17:39:27.170147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.164 [2024-10-14 17:39:27.212033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:28.164 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zCroqZxuk1 00:22:28.424 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:28.684 [2024-10-14 17:39:27.649675] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:28.684 [2024-10-14 17:39:27.659239] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:28.684 [2024-10-14 17:39:27.659261] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:28.684 [2024-10-14 17:39:27.659284] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:28.684 [2024-10-14 17:39:27.659977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12dc230 (107): Transport endpoint is not connected 00:22:28.684 [2024-10-14 17:39:27.660970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12dc230 (9): Bad file descriptor 00:22:28.684 [2024-10-14 17:39:27.661972] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:28.684 [2024-10-14 17:39:27.661981] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:28.684 [2024-10-14 17:39:27.661989] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:28.684 [2024-10-14 17:39:27.661998] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
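The same lookup failure from the other direction: host1 is the registered host, but the PSK binding created earlier names cnode1 only, so the identity ending in cnode2 matches nothing. The only (subsystem, host) PSK binding issued on the target in this run was:

    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0
    # no corresponding add_host (or subsystem setup) for cnode2 appears in the
    # trace, so "NVMe0R01 ...host1 ...cnode2" cannot resolve to a key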
00:22:28.684 request: 00:22:28.684 { 00:22:28.684 "name": "TLSTEST", 00:22:28.684 "trtype": "tcp", 00:22:28.684 "traddr": "10.0.0.2", 00:22:28.684 "adrfam": "ipv4", 00:22:28.684 "trsvcid": "4420", 00:22:28.684 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:28.684 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:28.684 "prchk_reftag": false, 00:22:28.684 "prchk_guard": false, 00:22:28.684 "hdgst": false, 00:22:28.684 "ddgst": false, 00:22:28.684 "psk": "key0", 00:22:28.684 "allow_unrecognized_csi": false, 00:22:28.684 "method": "bdev_nvme_attach_controller", 00:22:28.684 "req_id": 1 00:22:28.684 } 00:22:28.684 Got JSON-RPC error response 00:22:28.684 response: 00:22:28.684 { 00:22:28.684 "code": -5, 00:22:28.684 "message": "Input/output error" 00:22:28.684 } 00:22:28.684 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1130125 00:22:28.684 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1130125 ']' 00:22:28.684 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1130125 00:22:28.684 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:28.684 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:28.684 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1130125 00:22:28.684 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:28.684 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:28.684 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1130125' 00:22:28.684 killing process with pid 1130125 00:22:28.684 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1130125 00:22:28.684 Received shutdown signal, test time was about 10.000000 seconds 00:22:28.684 00:22:28.684 Latency(us) 00:22:28.684 [2024-10-14T15:39:27.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.684 [2024-10-14T15:39:27.822Z] =================================================================================================================== 00:22:28.684 [2024-10-14T15:39:27.822Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:28.684 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1130125 00:22:28.944 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:28.944 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:28.944 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:28.944 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:28.944 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:28.944 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:28.944 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:28.944 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:28.944 
17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:28.944 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:28.944 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:28.944 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:28.944 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:28.944 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:28.944 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:28.945 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:28.945 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:28.945 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:28.945 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1130232 00:22:28.945 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:28.945 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:28.945 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1130232 /var/tmp/bdevperf.sock 00:22:28.945 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1130232 ']' 00:22:28.945 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.945 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:28.945 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.945 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:28.945 17:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.945 [2024-10-14 17:39:27.946343] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:22:28.945 [2024-10-14 17:39:27.946395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1130232 ] 00:22:28.945 [2024-10-14 17:39:28.015055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.945 [2024-10-14 17:39:28.053560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.205 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:29.205 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:29.205 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:29.205 [2024-10-14 17:39:28.311434] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:29.205 [2024-10-14 17:39:28.311464] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:29.205 request: 00:22:29.205 { 00:22:29.205 "name": "key0", 00:22:29.205 "path": "", 00:22:29.205 "method": "keyring_file_add_key", 00:22:29.205 "req_id": 1 00:22:29.205 } 00:22:29.205 Got JSON-RPC error response 00:22:29.205 response: 00:22:29.205 { 00:22:29.205 "code": -1, 00:22:29.205 "message": "Operation not permitted" 00:22:29.205 } 00:22:29.205 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:29.464 [2024-10-14 17:39:28.495999] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:29.464 [2024-10-14 17:39:28.496026] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:29.464 request: 00:22:29.464 { 00:22:29.464 "name": "TLSTEST", 00:22:29.464 "trtype": "tcp", 00:22:29.464 "traddr": "10.0.0.2", 00:22:29.464 "adrfam": "ipv4", 00:22:29.464 "trsvcid": "4420", 00:22:29.464 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.464 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:29.464 "prchk_reftag": false, 00:22:29.464 "prchk_guard": false, 00:22:29.464 "hdgst": false, 00:22:29.464 "ddgst": false, 00:22:29.464 "psk": "key0", 00:22:29.464 "allow_unrecognized_csi": false, 00:22:29.464 "method": "bdev_nvme_attach_controller", 00:22:29.464 "req_id": 1 00:22:29.464 } 00:22:29.464 Got JSON-RPC error response 00:22:29.464 response: 00:22:29.464 { 00:22:29.464 "code": -126, 00:22:29.464 "message": "Required key not available" 00:22:29.464 } 00:22:29.464 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1130232 00:22:29.464 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1130232 ']' 00:22:29.464 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1130232 00:22:29.464 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:29.464 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:29.464 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1130232 00:22:29.464 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:29.464 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:29.464 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1130232' 00:22:29.464 killing process with pid 1130232 00:22:29.464 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1130232 00:22:29.464 Received shutdown signal, test time was about 10.000000 seconds 00:22:29.464 00:22:29.464 Latency(us) 00:22:29.464 [2024-10-14T15:39:28.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.464 [2024-10-14T15:39:28.602Z] =================================================================================================================== 00:22:29.464 [2024-10-14T15:39:28.602Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:29.464 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1130232 00:22:29.723 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:29.723 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:29.723 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:29.723 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:29.723 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:29.723 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1125180 00:22:29.723 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1125180 ']' 00:22:29.723 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1125180 00:22:29.723 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:29.723 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:29.723 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1125180 00:22:29.723 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:29.723 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:29.723 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1125180' 00:22:29.723 killing process with pid 1125180 00:22:29.723 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1125180 00:22:29.723 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1125180 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:29.983 17:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Sa0ojhDK0R 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Sa0ojhDK0R 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1130390 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1130390 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1130390 ']' 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:29.983 17:39:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.983 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:29.983 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.983 [2024-10-14 17:39:29.040887] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:22:29.983 [2024-10-14 17:39:29.040932] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.983 [2024-10-14 17:39:29.112555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.242 [2024-10-14 17:39:29.152486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.242 [2024-10-14 17:39:29.152524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:30.242 [2024-10-14 17:39:29.152531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.242 [2024-10-14 17:39:29.152537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.242 [2024-10-14 17:39:29.152542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.242 [2024-10-14 17:39:29.153113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.242 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:30.242 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:30.242 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:30.242 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:30.242 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.243 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.243 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Sa0ojhDK0R 00:22:30.243 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Sa0ojhDK0R 00:22:30.243 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:30.502 [2024-10-14 17:39:29.455439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.502 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:30.761 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:30.761 [2024-10-14 17:39:29.828395] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:30.761 [2024-10-14 17:39:29.828638] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.761 17:39:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:31.020 malloc0 00:22:31.020 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:31.280 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Sa0ojhDK0R 00:22:31.539 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:31.539 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Sa0ojhDK0R 00:22:31.539 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:22:31.539 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:31.539 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:31.539 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Sa0ojhDK0R 00:22:31.539 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:31.539 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1130667 00:22:31.539 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:31.539 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:31.539 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1130667 /var/tmp/bdevperf.sock 00:22:31.539 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1130667 ']' 00:22:31.539 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.539 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:31.539 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.539 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:31.539 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.539 [2024-10-14 17:39:30.655495] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:22:31.539 [2024-10-14 17:39:30.655541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1130667 ] 00:22:31.799 [2024-10-14 17:39:30.722405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.799 [2024-10-14 17:39:30.764111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.799 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:31.799 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:31.799 17:39:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Sa0ojhDK0R 00:22:32.058 17:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:32.317 [2024-10-14 17:39:31.213971] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.317 TLSTESTn1 00:22:32.317 17:39:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:32.317 Running I/O for 10 seconds... 00:22:34.635 5429.00 IOPS, 21.21 MiB/s [2024-10-14T15:39:34.712Z] 5507.50 IOPS, 21.51 MiB/s [2024-10-14T15:39:35.650Z] 5456.33 IOPS, 21.31 MiB/s [2024-10-14T15:39:36.587Z] 5245.25 IOPS, 20.49 MiB/s [2024-10-14T15:39:37.526Z] 5193.60 IOPS, 20.29 MiB/s [2024-10-14T15:39:38.466Z] 5156.67 IOPS, 20.14 MiB/s [2024-10-14T15:39:39.404Z] 5145.71 IOPS, 20.10 MiB/s [2024-10-14T15:39:40.781Z] 5116.12 IOPS, 19.98 MiB/s [2024-10-14T15:39:41.718Z] 5113.67 IOPS, 19.98 MiB/s [2024-10-14T15:39:41.718Z] 5114.60 IOPS, 19.98 MiB/s 00:22:42.580 Latency(us) 00:22:42.580 [2024-10-14T15:39:41.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.580 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:42.580 Verification LBA range: start 0x0 length 0x2000 00:22:42.580 TLSTESTn1 : 10.02 5119.10 20.00 0.00 0.00 24969.31 6366.35 31332.45 00:22:42.580 [2024-10-14T15:39:41.718Z] =================================================================================================================== 00:22:42.580 [2024-10-14T15:39:41.718Z] Total : 5119.10 20.00 0.00 0.00 24969.31 6366.35 31332.45 00:22:42.580 { 00:22:42.580 "results": [ 00:22:42.580 { 00:22:42.580 "job": "TLSTESTn1", 00:22:42.580 "core_mask": "0x4", 00:22:42.580 "workload": "verify", 00:22:42.580 "status": "finished", 00:22:42.580 "verify_range": { 00:22:42.580 "start": 0, 00:22:42.580 "length": 8192 00:22:42.580 }, 00:22:42.580 "queue_depth": 128, 00:22:42.580 "io_size": 4096, 00:22:42.580 "runtime": 10.016209, 00:22:42.580 "iops": 5119.102446843911, 00:22:42.580 "mibps": 19.996493932984027, 00:22:42.580 "io_failed": 0, 00:22:42.580 "io_timeout": 0, 00:22:42.580 "avg_latency_us": 24969.312835280853, 00:22:42.580 "min_latency_us": 6366.354285714286, 00:22:42.580 "max_latency_us": 31332.449523809522 00:22:42.580 } 00:22:42.580 ], 00:22:42.580 
"core_count": 1 00:22:42.580 } 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1130667 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1130667 ']' 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1130667 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1130667 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1130667' 00:22:42.580 killing process with pid 1130667 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1130667 00:22:42.580 Received shutdown signal, test time was about 10.000000 seconds 00:22:42.580 00:22:42.580 Latency(us) 00:22:42.580 [2024-10-14T15:39:41.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.580 [2024-10-14T15:39:41.718Z] =================================================================================================================== 00:22:42.580 [2024-10-14T15:39:41.718Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1130667 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Sa0ojhDK0R 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Sa0ojhDK0R 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Sa0ojhDK0R 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Sa0ojhDK0R 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Sa0ojhDK0R 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1132480 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1132480 /var/tmp/bdevperf.sock 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1132480 ']' 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:42.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:42.580 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:42.580 [2024-10-14 17:39:41.706020] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:22:42.580 [2024-10-14 17:39:41.706066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132480 ] 00:22:42.839 [2024-10-14 17:39:41.768524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.839 [2024-10-14 17:39:41.809920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.839 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:42.839 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:42.839 17:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Sa0ojhDK0R 00:22:43.098 [2024-10-14 17:39:42.066975] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Sa0ojhDK0R': 0100666 00:22:43.098 [2024-10-14 17:39:42.067003] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:43.098 request: 00:22:43.098 { 00:22:43.098 "name": "key0", 00:22:43.098 "path": "/tmp/tmp.Sa0ojhDK0R", 00:22:43.098 "method": "keyring_file_add_key", 00:22:43.098 "req_id": 1 00:22:43.098 } 00:22:43.098 Got JSON-RPC error response 00:22:43.098 response: 00:22:43.098 { 00:22:43.098 "code": -1, 00:22:43.098 "message": "Operation not permitted" 00:22:43.098 } 00:22:43.098 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:43.357 [2024-10-14 17:39:42.239503] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:43.357 [2024-10-14 17:39:42.239527] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:43.357 request: 00:22:43.357 { 00:22:43.357 "name": "TLSTEST", 00:22:43.357 "trtype": "tcp", 00:22:43.357 "traddr": "10.0.0.2", 00:22:43.357 "adrfam": "ipv4", 00:22:43.357 "trsvcid": "4420", 00:22:43.357 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.357 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.357 "prchk_reftag": false, 00:22:43.357 "prchk_guard": false, 00:22:43.357 "hdgst": false, 00:22:43.357 "ddgst": false, 00:22:43.357 "psk": "key0", 00:22:43.357 "allow_unrecognized_csi": false, 00:22:43.357 "method": "bdev_nvme_attach_controller", 00:22:43.357 "req_id": 1 00:22:43.357 } 00:22:43.357 Got JSON-RPC error response 00:22:43.357 response: 00:22:43.357 { 00:22:43.357 "code": -126, 00:22:43.357 "message": "Required key not available" 00:22:43.357 } 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1132480 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1132480 ']' 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1132480 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1132480 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1132480' 00:22:43.357 killing process with pid 1132480 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1132480 00:22:43.357 Received shutdown signal, test time was about 10.000000 seconds 00:22:43.357 00:22:43.357 Latency(us) 00:22:43.357 [2024-10-14T15:39:42.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.357 [2024-10-14T15:39:42.495Z] =================================================================================================================== 00:22:43.357 [2024-10-14T15:39:42.495Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1132480 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1130390 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1130390 ']' 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1130390 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:43.357 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1130390 00:22:43.617 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:43.617 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:43.617 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1130390' 00:22:43.617 killing process with pid 1130390 00:22:43.617 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1130390 00:22:43.617 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1130390 00:22:43.617 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:43.617 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:43.617 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:43.617 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.617 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
nvmfpid=1132719 00:22:43.617 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:43.617 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1132719 00:22:43.617 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1132719 ']' 00:22:43.617 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.617 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:43.617 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.617 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:43.617 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.617 [2024-10-14 17:39:42.725719] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:22:43.617 [2024-10-14 17:39:42.725765] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.876 [2024-10-14 17:39:42.783299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.876 [2024-10-14 17:39:42.823681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.876 [2024-10-14 17:39:42.823715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.876 [2024-10-14 17:39:42.823723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.876 [2024-10-14 17:39:42.823729] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.876 [2024-10-14 17:39:42.823734] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:43.876 [2024-10-14 17:39:42.824276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.876 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:43.876 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:43.876 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:43.877 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:43.877 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.877 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.877 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Sa0ojhDK0R 00:22:43.877 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:43.877 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Sa0ojhDK0R 00:22:43.877 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:22:43.877 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:43.877 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:22:43.877 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:43.877 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.Sa0ojhDK0R 00:22:43.877 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Sa0ojhDK0R 00:22:43.877 17:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:44.135 [2024-10-14 17:39:43.127165] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.135 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:44.394 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:44.394 [2024-10-14 17:39:43.528188] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:44.394 [2024-10-14 17:39:43.528380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.653 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:44.653 malloc0 00:22:44.653 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:44.911 17:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Sa0ojhDK0R 00:22:45.170 [2024-10-14 
17:39:44.129736] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Sa0ojhDK0R': 0100666 00:22:45.170 [2024-10-14 17:39:44.129758] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:45.170 request: 00:22:45.170 { 00:22:45.170 "name": "key0", 00:22:45.170 "path": "/tmp/tmp.Sa0ojhDK0R", 00:22:45.170 "method": "keyring_file_add_key", 00:22:45.170 "req_id": 1 00:22:45.170 } 00:22:45.170 Got JSON-RPC error response 00:22:45.170 response: 00:22:45.170 { 00:22:45.170 "code": -1, 00:22:45.170 "message": "Operation not permitted" 00:22:45.170 } 00:22:45.170 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:45.429 [2024-10-14 17:39:44.318250] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:22:45.429 [2024-10-14 17:39:44.318277] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:45.429 request: 00:22:45.429 { 00:22:45.429 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.429 "host": "nqn.2016-06.io.spdk:host1", 00:22:45.430 "psk": "key0", 00:22:45.430 "method": "nvmf_subsystem_add_host", 00:22:45.430 "req_id": 1 00:22:45.430 } 00:22:45.430 Got JSON-RPC error response 00:22:45.430 response: 00:22:45.430 { 00:22:45.430 "code": -32603, 00:22:45.430 "message": "Internal error" 00:22:45.430 } 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1132719 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1132719 ']' 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1132719 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1132719 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1132719' 00:22:45.430 killing process with pid 1132719 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1132719 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1132719 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Sa0ojhDK0R 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:22:45.430 17:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1132987 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1132987 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1132987 ']' 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:45.430 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.689 [2024-10-14 17:39:44.594335] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:22:45.689 [2024-10-14 17:39:44.594382] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.689 [2024-10-14 17:39:44.667235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.689 [2024-10-14 17:39:44.702962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.689 [2024-10-14 17:39:44.702998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.689 [2024-10-14 17:39:44.703005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.689 [2024-10-14 17:39:44.703011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.689 [2024-10-14 17:39:44.703015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:45.689 [2024-10-14 17:39:44.703575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.689 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:45.689 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:45.689 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:45.689 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:45.689 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.947 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.947 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Sa0ojhDK0R 00:22:45.947 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Sa0ojhDK0R 00:22:45.947 17:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:45.947 [2024-10-14 17:39:45.009976] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.947 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:46.206 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:46.465 [2024-10-14 17:39:45.386934] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:46.465 [2024-10-14 17:39:45.387154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.465 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:46.465 malloc0 00:22:46.723 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:46.723 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Sa0ojhDK0R 00:22:46.983 17:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:47.242 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1133241 00:22:47.242 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:47.242 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:47.242 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1133241 /var/tmp/bdevperf.sock 00:22:47.242 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1133241 ']' 00:22:47.242 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.242 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:47.242 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.242 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:47.242 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.242 [2024-10-14 17:39:46.235115] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:22:47.242 [2024-10-14 17:39:46.235167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133241 ] 00:22:47.242 [2024-10-14 17:39:46.304994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.242 [2024-10-14 17:39:46.345251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.501 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:47.501 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:47.501 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Sa0ojhDK0R 00:22:47.760 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:47.760 [2024-10-14 17:39:46.816352] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:47.760 TLSTESTn1 00:22:48.018 17:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:48.278 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:22:48.278 "subsystems": [ 00:22:48.278 { 00:22:48.278 "subsystem": "keyring", 00:22:48.278 "config": [ 00:22:48.278 { 00:22:48.278 "method": "keyring_file_add_key", 00:22:48.278 "params": { 00:22:48.278 "name": "key0", 00:22:48.278 "path": "/tmp/tmp.Sa0ojhDK0R" 00:22:48.278 } 00:22:48.278 } 00:22:48.278 ] 00:22:48.278 }, 00:22:48.278 { 00:22:48.278 "subsystem": "iobuf", 00:22:48.278 "config": [ 00:22:48.278 { 00:22:48.278 "method": "iobuf_set_options", 00:22:48.278 "params": { 00:22:48.278 "small_pool_count": 8192, 00:22:48.278 "large_pool_count": 1024, 00:22:48.278 "small_bufsize": 8192, 00:22:48.278 "large_bufsize": 135168 00:22:48.278 } 00:22:48.278 } 00:22:48.278 ] 00:22:48.278 }, 00:22:48.278 { 00:22:48.278 "subsystem": "sock", 00:22:48.278 "config": [ 00:22:48.278 { 00:22:48.278 "method": "sock_set_default_impl", 00:22:48.278 "params": { 00:22:48.278 "impl_name": "posix" 00:22:48.278 } 00:22:48.278 }, 
00:22:48.278 { 00:22:48.278 "method": "sock_impl_set_options", 00:22:48.278 "params": { 00:22:48.278 "impl_name": "ssl", 00:22:48.278 "recv_buf_size": 4096, 00:22:48.278 "send_buf_size": 4096, 00:22:48.278 "enable_recv_pipe": true, 00:22:48.278 "enable_quickack": false, 00:22:48.278 "enable_placement_id": 0, 00:22:48.278 "enable_zerocopy_send_server": true, 00:22:48.278 "enable_zerocopy_send_client": false, 00:22:48.278 "zerocopy_threshold": 0, 00:22:48.278 "tls_version": 0, 00:22:48.278 "enable_ktls": false 00:22:48.278 } 00:22:48.278 }, 00:22:48.278 { 00:22:48.278 "method": "sock_impl_set_options", 00:22:48.278 "params": { 00:22:48.278 "impl_name": "posix", 00:22:48.278 "recv_buf_size": 2097152, 00:22:48.278 "send_buf_size": 2097152, 00:22:48.278 "enable_recv_pipe": true, 00:22:48.278 "enable_quickack": false, 00:22:48.278 "enable_placement_id": 0, 00:22:48.278 "enable_zerocopy_send_server": true, 00:22:48.278 "enable_zerocopy_send_client": false, 00:22:48.278 "zerocopy_threshold": 0, 00:22:48.278 "tls_version": 0, 00:22:48.278 "enable_ktls": false 00:22:48.278 } 00:22:48.278 } 00:22:48.278 ] 00:22:48.278 }, 00:22:48.278 { 00:22:48.278 "subsystem": "vmd", 00:22:48.278 "config": [] 00:22:48.278 }, 00:22:48.278 { 00:22:48.278 "subsystem": "accel", 00:22:48.278 "config": [ 00:22:48.278 { 00:22:48.278 "method": "accel_set_options", 00:22:48.278 "params": { 00:22:48.278 "small_cache_size": 128, 00:22:48.278 "large_cache_size": 16, 00:22:48.278 "task_count": 2048, 00:22:48.278 "sequence_count": 2048, 00:22:48.278 "buf_count": 2048 00:22:48.278 } 00:22:48.278 } 00:22:48.278 ] 00:22:48.278 }, 00:22:48.278 { 00:22:48.278 "subsystem": "bdev", 00:22:48.278 "config": [ 00:22:48.279 { 00:22:48.279 "method": "bdev_set_options", 00:22:48.279 "params": { 00:22:48.279 "bdev_io_pool_size": 65535, 00:22:48.279 "bdev_io_cache_size": 256, 00:22:48.279 "bdev_auto_examine": true, 00:22:48.279 "iobuf_small_cache_size": 128, 00:22:48.279 "iobuf_large_cache_size": 16 00:22:48.279 } 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "method": "bdev_raid_set_options", 00:22:48.279 "params": { 00:22:48.279 "process_window_size_kb": 1024, 00:22:48.279 "process_max_bandwidth_mb_sec": 0 00:22:48.279 } 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "method": "bdev_iscsi_set_options", 00:22:48.279 "params": { 00:22:48.279 "timeout_sec": 30 00:22:48.279 } 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "method": "bdev_nvme_set_options", 00:22:48.279 "params": { 00:22:48.279 "action_on_timeout": "none", 00:22:48.279 "timeout_us": 0, 00:22:48.279 "timeout_admin_us": 0, 00:22:48.279 "keep_alive_timeout_ms": 10000, 00:22:48.279 "arbitration_burst": 0, 00:22:48.279 "low_priority_weight": 0, 00:22:48.279 "medium_priority_weight": 0, 00:22:48.279 "high_priority_weight": 0, 00:22:48.279 "nvme_adminq_poll_period_us": 10000, 00:22:48.279 "nvme_ioq_poll_period_us": 0, 00:22:48.279 "io_queue_requests": 0, 00:22:48.279 "delay_cmd_submit": true, 00:22:48.279 "transport_retry_count": 4, 00:22:48.279 "bdev_retry_count": 3, 00:22:48.279 "transport_ack_timeout": 0, 00:22:48.279 "ctrlr_loss_timeout_sec": 0, 00:22:48.279 "reconnect_delay_sec": 0, 00:22:48.279 "fast_io_fail_timeout_sec": 0, 00:22:48.279 "disable_auto_failback": false, 00:22:48.279 "generate_uuids": false, 00:22:48.279 "transport_tos": 0, 00:22:48.279 "nvme_error_stat": false, 00:22:48.279 "rdma_srq_size": 0, 00:22:48.279 "io_path_stat": false, 00:22:48.279 "allow_accel_sequence": false, 00:22:48.279 "rdma_max_cq_size": 0, 00:22:48.279 "rdma_cm_event_timeout_ms": 0, 00:22:48.279 
"dhchap_digests": [ 00:22:48.279 "sha256", 00:22:48.279 "sha384", 00:22:48.279 "sha512" 00:22:48.279 ], 00:22:48.279 "dhchap_dhgroups": [ 00:22:48.279 "null", 00:22:48.279 "ffdhe2048", 00:22:48.279 "ffdhe3072", 00:22:48.279 "ffdhe4096", 00:22:48.279 "ffdhe6144", 00:22:48.279 "ffdhe8192" 00:22:48.279 ] 00:22:48.279 } 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "method": "bdev_nvme_set_hotplug", 00:22:48.279 "params": { 00:22:48.279 "period_us": 100000, 00:22:48.279 "enable": false 00:22:48.279 } 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "method": "bdev_malloc_create", 00:22:48.279 "params": { 00:22:48.279 "name": "malloc0", 00:22:48.279 "num_blocks": 8192, 00:22:48.279 "block_size": 4096, 00:22:48.279 "physical_block_size": 4096, 00:22:48.279 "uuid": "7dd69ac7-8d09-47f5-bda5-3a4918f07bd4", 00:22:48.279 "optimal_io_boundary": 0, 00:22:48.279 "md_size": 0, 00:22:48.279 "dif_type": 0, 00:22:48.279 "dif_is_head_of_md": false, 00:22:48.279 "dif_pi_format": 0 00:22:48.279 } 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "method": "bdev_wait_for_examine" 00:22:48.279 } 00:22:48.279 ] 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "subsystem": "nbd", 00:22:48.279 "config": [] 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "subsystem": "scheduler", 00:22:48.279 "config": [ 00:22:48.279 { 00:22:48.279 "method": "framework_set_scheduler", 00:22:48.279 "params": { 00:22:48.279 "name": "static" 00:22:48.279 } 00:22:48.279 } 00:22:48.279 ] 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "subsystem": "nvmf", 00:22:48.279 "config": [ 00:22:48.279 { 00:22:48.279 "method": "nvmf_set_config", 00:22:48.279 "params": { 00:22:48.279 "discovery_filter": "match_any", 00:22:48.279 "admin_cmd_passthru": { 00:22:48.279 "identify_ctrlr": false 00:22:48.279 }, 00:22:48.279 "dhchap_digests": [ 00:22:48.279 "sha256", 00:22:48.279 "sha384", 00:22:48.279 "sha512" 00:22:48.279 ], 00:22:48.279 "dhchap_dhgroups": [ 00:22:48.279 "null", 00:22:48.279 "ffdhe2048", 00:22:48.279 "ffdhe3072", 00:22:48.279 "ffdhe4096", 00:22:48.279 "ffdhe6144", 00:22:48.279 "ffdhe8192" 00:22:48.279 ] 00:22:48.279 } 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "method": "nvmf_set_max_subsystems", 00:22:48.279 "params": { 00:22:48.279 "max_subsystems": 1024 00:22:48.279 } 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "method": "nvmf_set_crdt", 00:22:48.279 "params": { 00:22:48.279 "crdt1": 0, 00:22:48.279 "crdt2": 0, 00:22:48.279 "crdt3": 0 00:22:48.279 } 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "method": "nvmf_create_transport", 00:22:48.279 "params": { 00:22:48.279 "trtype": "TCP", 00:22:48.279 "max_queue_depth": 128, 00:22:48.279 "max_io_qpairs_per_ctrlr": 127, 00:22:48.279 "in_capsule_data_size": 4096, 00:22:48.279 "max_io_size": 131072, 00:22:48.279 "io_unit_size": 131072, 00:22:48.279 "max_aq_depth": 128, 00:22:48.279 "num_shared_buffers": 511, 00:22:48.279 "buf_cache_size": 4294967295, 00:22:48.279 "dif_insert_or_strip": false, 00:22:48.279 "zcopy": false, 00:22:48.279 "c2h_success": false, 00:22:48.279 "sock_priority": 0, 00:22:48.279 "abort_timeout_sec": 1, 00:22:48.279 "ack_timeout": 0, 00:22:48.279 "data_wr_pool_size": 0 00:22:48.279 } 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "method": "nvmf_create_subsystem", 00:22:48.279 "params": { 00:22:48.279 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.279 "allow_any_host": false, 00:22:48.279 "serial_number": "SPDK00000000000001", 00:22:48.279 "model_number": "SPDK bdev Controller", 00:22:48.279 "max_namespaces": 10, 00:22:48.279 "min_cntlid": 1, 00:22:48.279 "max_cntlid": 65519, 00:22:48.279 
"ana_reporting": false 00:22:48.279 } 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "method": "nvmf_subsystem_add_host", 00:22:48.279 "params": { 00:22:48.279 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.279 "host": "nqn.2016-06.io.spdk:host1", 00:22:48.279 "psk": "key0" 00:22:48.279 } 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "method": "nvmf_subsystem_add_ns", 00:22:48.279 "params": { 00:22:48.279 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.279 "namespace": { 00:22:48.279 "nsid": 1, 00:22:48.279 "bdev_name": "malloc0", 00:22:48.279 "nguid": "7DD69AC78D0947F5BDA53A4918F07BD4", 00:22:48.279 "uuid": "7dd69ac7-8d09-47f5-bda5-3a4918f07bd4", 00:22:48.279 "no_auto_visible": false 00:22:48.279 } 00:22:48.279 } 00:22:48.279 }, 00:22:48.279 { 00:22:48.279 "method": "nvmf_subsystem_add_listener", 00:22:48.279 "params": { 00:22:48.279 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.279 "listen_address": { 00:22:48.279 "trtype": "TCP", 00:22:48.279 "adrfam": "IPv4", 00:22:48.279 "traddr": "10.0.0.2", 00:22:48.279 "trsvcid": "4420" 00:22:48.279 }, 00:22:48.279 "secure_channel": true 00:22:48.279 } 00:22:48.279 } 00:22:48.279 ] 00:22:48.279 } 00:22:48.279 ] 00:22:48.279 }' 00:22:48.279 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:48.539 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:22:48.539 "subsystems": [ 00:22:48.539 { 00:22:48.539 "subsystem": "keyring", 00:22:48.539 "config": [ 00:22:48.539 { 00:22:48.539 "method": "keyring_file_add_key", 00:22:48.539 "params": { 00:22:48.539 "name": "key0", 00:22:48.539 "path": "/tmp/tmp.Sa0ojhDK0R" 00:22:48.539 } 00:22:48.539 } 00:22:48.539 ] 00:22:48.539 }, 00:22:48.539 { 00:22:48.539 "subsystem": "iobuf", 00:22:48.539 "config": [ 00:22:48.539 { 00:22:48.539 "method": "iobuf_set_options", 00:22:48.539 "params": { 00:22:48.539 "small_pool_count": 8192, 00:22:48.539 "large_pool_count": 1024, 00:22:48.539 "small_bufsize": 8192, 00:22:48.539 "large_bufsize": 135168 00:22:48.539 } 00:22:48.539 } 00:22:48.539 ] 00:22:48.539 }, 00:22:48.539 { 00:22:48.539 "subsystem": "sock", 00:22:48.539 "config": [ 00:22:48.539 { 00:22:48.539 "method": "sock_set_default_impl", 00:22:48.539 "params": { 00:22:48.539 "impl_name": "posix" 00:22:48.539 } 00:22:48.539 }, 00:22:48.539 { 00:22:48.539 "method": "sock_impl_set_options", 00:22:48.539 "params": { 00:22:48.539 "impl_name": "ssl", 00:22:48.539 "recv_buf_size": 4096, 00:22:48.539 "send_buf_size": 4096, 00:22:48.539 "enable_recv_pipe": true, 00:22:48.539 "enable_quickack": false, 00:22:48.539 "enable_placement_id": 0, 00:22:48.539 "enable_zerocopy_send_server": true, 00:22:48.539 "enable_zerocopy_send_client": false, 00:22:48.539 "zerocopy_threshold": 0, 00:22:48.539 "tls_version": 0, 00:22:48.539 "enable_ktls": false 00:22:48.539 } 00:22:48.539 }, 00:22:48.539 { 00:22:48.539 "method": "sock_impl_set_options", 00:22:48.539 "params": { 00:22:48.539 "impl_name": "posix", 00:22:48.539 "recv_buf_size": 2097152, 00:22:48.539 "send_buf_size": 2097152, 00:22:48.539 "enable_recv_pipe": true, 00:22:48.539 "enable_quickack": false, 00:22:48.539 "enable_placement_id": 0, 00:22:48.539 "enable_zerocopy_send_server": true, 00:22:48.539 "enable_zerocopy_send_client": false, 00:22:48.539 "zerocopy_threshold": 0, 00:22:48.539 "tls_version": 0, 00:22:48.539 "enable_ktls": false 00:22:48.539 } 00:22:48.539 } 00:22:48.539 ] 00:22:48.539 }, 00:22:48.539 { 00:22:48.539 
"subsystem": "vmd", 00:22:48.539 "config": [] 00:22:48.539 }, 00:22:48.539 { 00:22:48.539 "subsystem": "accel", 00:22:48.539 "config": [ 00:22:48.539 { 00:22:48.539 "method": "accel_set_options", 00:22:48.539 "params": { 00:22:48.539 "small_cache_size": 128, 00:22:48.539 "large_cache_size": 16, 00:22:48.539 "task_count": 2048, 00:22:48.539 "sequence_count": 2048, 00:22:48.539 "buf_count": 2048 00:22:48.539 } 00:22:48.539 } 00:22:48.539 ] 00:22:48.539 }, 00:22:48.539 { 00:22:48.539 "subsystem": "bdev", 00:22:48.539 "config": [ 00:22:48.539 { 00:22:48.540 "method": "bdev_set_options", 00:22:48.540 "params": { 00:22:48.540 "bdev_io_pool_size": 65535, 00:22:48.540 "bdev_io_cache_size": 256, 00:22:48.540 "bdev_auto_examine": true, 00:22:48.540 "iobuf_small_cache_size": 128, 00:22:48.540 "iobuf_large_cache_size": 16 00:22:48.540 } 00:22:48.540 }, 00:22:48.540 { 00:22:48.540 "method": "bdev_raid_set_options", 00:22:48.540 "params": { 00:22:48.540 "process_window_size_kb": 1024, 00:22:48.540 "process_max_bandwidth_mb_sec": 0 00:22:48.540 } 00:22:48.540 }, 00:22:48.540 { 00:22:48.540 "method": "bdev_iscsi_set_options", 00:22:48.540 "params": { 00:22:48.540 "timeout_sec": 30 00:22:48.540 } 00:22:48.540 }, 00:22:48.540 { 00:22:48.540 "method": "bdev_nvme_set_options", 00:22:48.540 "params": { 00:22:48.540 "action_on_timeout": "none", 00:22:48.540 "timeout_us": 0, 00:22:48.540 "timeout_admin_us": 0, 00:22:48.540 "keep_alive_timeout_ms": 10000, 00:22:48.540 "arbitration_burst": 0, 00:22:48.540 "low_priority_weight": 0, 00:22:48.540 "medium_priority_weight": 0, 00:22:48.540 "high_priority_weight": 0, 00:22:48.540 "nvme_adminq_poll_period_us": 10000, 00:22:48.540 "nvme_ioq_poll_period_us": 0, 00:22:48.540 "io_queue_requests": 512, 00:22:48.540 "delay_cmd_submit": true, 00:22:48.540 "transport_retry_count": 4, 00:22:48.540 "bdev_retry_count": 3, 00:22:48.540 "transport_ack_timeout": 0, 00:22:48.540 "ctrlr_loss_timeout_sec": 0, 00:22:48.540 "reconnect_delay_sec": 0, 00:22:48.540 "fast_io_fail_timeout_sec": 0, 00:22:48.540 "disable_auto_failback": false, 00:22:48.540 "generate_uuids": false, 00:22:48.540 "transport_tos": 0, 00:22:48.540 "nvme_error_stat": false, 00:22:48.540 "rdma_srq_size": 0, 00:22:48.540 "io_path_stat": false, 00:22:48.540 "allow_accel_sequence": false, 00:22:48.540 "rdma_max_cq_size": 0, 00:22:48.540 "rdma_cm_event_timeout_ms": 0, 00:22:48.540 "dhchap_digests": [ 00:22:48.540 "sha256", 00:22:48.540 "sha384", 00:22:48.540 "sha512" 00:22:48.540 ], 00:22:48.540 "dhchap_dhgroups": [ 00:22:48.540 "null", 00:22:48.540 "ffdhe2048", 00:22:48.540 "ffdhe3072", 00:22:48.540 "ffdhe4096", 00:22:48.540 "ffdhe6144", 00:22:48.540 "ffdhe8192" 00:22:48.540 ] 00:22:48.540 } 00:22:48.540 }, 00:22:48.540 { 00:22:48.540 "method": "bdev_nvme_attach_controller", 00:22:48.540 "params": { 00:22:48.540 "name": "TLSTEST", 00:22:48.540 "trtype": "TCP", 00:22:48.540 "adrfam": "IPv4", 00:22:48.540 "traddr": "10.0.0.2", 00:22:48.540 "trsvcid": "4420", 00:22:48.540 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.540 "prchk_reftag": false, 00:22:48.540 "prchk_guard": false, 00:22:48.540 "ctrlr_loss_timeout_sec": 0, 00:22:48.540 "reconnect_delay_sec": 0, 00:22:48.540 "fast_io_fail_timeout_sec": 0, 00:22:48.540 "psk": "key0", 00:22:48.540 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:48.540 "hdgst": false, 00:22:48.540 "ddgst": false, 00:22:48.540 "multipath": "multipath" 00:22:48.540 } 00:22:48.540 }, 00:22:48.540 { 00:22:48.540 "method": "bdev_nvme_set_hotplug", 00:22:48.540 "params": { 00:22:48.540 "period_us": 
100000, 00:22:48.540 "enable": false 00:22:48.540 } 00:22:48.540 }, 00:22:48.540 { 00:22:48.540 "method": "bdev_wait_for_examine" 00:22:48.540 } 00:22:48.540 ] 00:22:48.540 }, 00:22:48.540 { 00:22:48.540 "subsystem": "nbd", 00:22:48.540 "config": [] 00:22:48.540 } 00:22:48.540 ] 00:22:48.540 }' 00:22:48.540 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1133241 00:22:48.540 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1133241 ']' 00:22:48.540 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1133241 00:22:48.540 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:48.540 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:48.540 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1133241 00:22:48.540 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:48.540 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:48.540 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1133241' 00:22:48.540 killing process with pid 1133241 00:22:48.540 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1133241 00:22:48.540 Received shutdown signal, test time was about 10.000000 seconds 00:22:48.540 00:22:48.540 Latency(us) 00:22:48.540 [2024-10-14T15:39:47.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.540 [2024-10-14T15:39:47.678Z] =================================================================================================================== 00:22:48.540 [2024-10-14T15:39:47.678Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:48.540 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1133241 00:22:48.540 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1132987 00:22:48.540 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1132987 ']' 00:22:48.540 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1132987 00:22:48.540 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:48.540 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:48.540 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1132987 00:22:48.800 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:48.801 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:48.801 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1132987' 00:22:48.801 killing process with pid 1132987 00:22:48.801 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1132987 00:22:48.801 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1132987 00:22:48.801 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:48.801 
17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:48.801 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:48.801 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:22:48.801 "subsystems": [ 00:22:48.801 { 00:22:48.801 "subsystem": "keyring", 00:22:48.801 "config": [ 00:22:48.801 { 00:22:48.801 "method": "keyring_file_add_key", 00:22:48.801 "params": { 00:22:48.801 "name": "key0", 00:22:48.801 "path": "/tmp/tmp.Sa0ojhDK0R" 00:22:48.801 } 00:22:48.801 } 00:22:48.801 ] 00:22:48.801 }, 00:22:48.801 { 00:22:48.801 "subsystem": "iobuf", 00:22:48.801 "config": [ 00:22:48.801 { 00:22:48.801 "method": "iobuf_set_options", 00:22:48.801 "params": { 00:22:48.801 "small_pool_count": 8192, 00:22:48.801 "large_pool_count": 1024, 00:22:48.801 "small_bufsize": 8192, 00:22:48.801 "large_bufsize": 135168 00:22:48.801 } 00:22:48.801 } 00:22:48.801 ] 00:22:48.801 }, 00:22:48.801 { 00:22:48.801 "subsystem": "sock", 00:22:48.801 "config": [ 00:22:48.801 { 00:22:48.801 "method": "sock_set_default_impl", 00:22:48.801 "params": { 00:22:48.801 "impl_name": "posix" 00:22:48.801 } 00:22:48.801 }, 00:22:48.801 { 00:22:48.801 "method": "sock_impl_set_options", 00:22:48.801 "params": { 00:22:48.801 "impl_name": "ssl", 00:22:48.801 "recv_buf_size": 4096, 00:22:48.801 "send_buf_size": 4096, 00:22:48.801 "enable_recv_pipe": true, 00:22:48.801 "enable_quickack": false, 00:22:48.801 "enable_placement_id": 0, 00:22:48.801 "enable_zerocopy_send_server": true, 00:22:48.801 "enable_zerocopy_send_client": false, 00:22:48.801 "zerocopy_threshold": 0, 00:22:48.801 "tls_version": 0, 00:22:48.801 "enable_ktls": false 00:22:48.801 } 00:22:48.801 }, 00:22:48.801 { 00:22:48.801 "method": "sock_impl_set_options", 00:22:48.801 "params": { 00:22:48.801 "impl_name": "posix", 00:22:48.801 "recv_buf_size": 2097152, 00:22:48.801 "send_buf_size": 2097152, 00:22:48.801 "enable_recv_pipe": true, 00:22:48.801 "enable_quickack": false, 00:22:48.801 "enable_placement_id": 0, 00:22:48.801 "enable_zerocopy_send_server": true, 00:22:48.801 "enable_zerocopy_send_client": false, 00:22:48.801 "zerocopy_threshold": 0, 00:22:48.801 "tls_version": 0, 00:22:48.801 "enable_ktls": false 00:22:48.801 } 00:22:48.801 } 00:22:48.801 ] 00:22:48.801 }, 00:22:48.801 { 00:22:48.801 "subsystem": "vmd", 00:22:48.801 "config": [] 00:22:48.801 }, 00:22:48.801 { 00:22:48.801 "subsystem": "accel", 00:22:48.801 "config": [ 00:22:48.801 { 00:22:48.801 "method": "accel_set_options", 00:22:48.801 "params": { 00:22:48.801 "small_cache_size": 128, 00:22:48.801 "large_cache_size": 16, 00:22:48.801 "task_count": 2048, 00:22:48.801 "sequence_count": 2048, 00:22:48.801 "buf_count": 2048 00:22:48.801 } 00:22:48.801 } 00:22:48.801 ] 00:22:48.801 }, 00:22:48.801 { 00:22:48.801 "subsystem": "bdev", 00:22:48.801 "config": [ 00:22:48.801 { 00:22:48.801 "method": "bdev_set_options", 00:22:48.801 "params": { 00:22:48.801 "bdev_io_pool_size": 65535, 00:22:48.801 "bdev_io_cache_size": 256, 00:22:48.801 "bdev_auto_examine": true, 00:22:48.801 "iobuf_small_cache_size": 128, 00:22:48.801 "iobuf_large_cache_size": 16 00:22:48.801 } 00:22:48.801 }, 00:22:48.801 { 00:22:48.801 "method": "bdev_raid_set_options", 00:22:48.801 "params": { 00:22:48.801 "process_window_size_kb": 1024, 00:22:48.801 "process_max_bandwidth_mb_sec": 0 00:22:48.801 } 00:22:48.801 }, 00:22:48.801 { 00:22:48.801 "method": "bdev_iscsi_set_options", 00:22:48.801 "params": { 00:22:48.801 
"timeout_sec": 30 00:22:48.801 } 00:22:48.801 }, 00:22:48.801 { 00:22:48.801 "method": "bdev_nvme_set_options", 00:22:48.801 "params": { 00:22:48.801 "action_on_timeout": "none", 00:22:48.801 "timeout_us": 0, 00:22:48.801 "timeout_admin_us": 0, 00:22:48.801 "keep_alive_timeout_ms": 10000, 00:22:48.801 "arbitration_burst": 0, 00:22:48.801 "low_priority_weight": 0, 00:22:48.801 "medium_priority_weight": 0, 00:22:48.801 "high_priority_weight": 0, 00:22:48.801 "nvme_adminq_poll_period_us": 10000, 00:22:48.801 "nvme_ioq_poll_period_us": 0, 00:22:48.801 "io_queue_requests": 0, 00:22:48.801 "delay_cmd_submit": true, 00:22:48.801 "transport_retry_count": 4, 00:22:48.801 "bdev_retry_count": 3, 00:22:48.801 "transport_ack_timeout": 0, 00:22:48.801 "ctrlr_loss_timeout_sec": 0, 00:22:48.801 "reconnect_delay_sec": 0, 00:22:48.801 "fast_io_fail_timeout_sec": 0, 00:22:48.801 "disable_auto_failback": false, 00:22:48.801 "generate_uuids": false, 00:22:48.801 "transport_tos": 0, 00:22:48.801 "nvme_error_stat": false, 00:22:48.801 "rdma_srq_size": 0, 00:22:48.801 "io_path_stat": false, 00:22:48.801 "allow_accel_sequence": false, 00:22:48.801 "rdma_max_cq_size": 0, 00:22:48.801 "rdma_cm_event_timeout_ms": 0, 00:22:48.801 "dhchap_digests": [ 00:22:48.801 "sha256", 00:22:48.801 "sha384", 00:22:48.801 "sha512" 00:22:48.801 ], 00:22:48.801 "dhchap_dhgroups": [ 00:22:48.801 "null", 00:22:48.801 "ffdhe2048", 00:22:48.801 "ffdhe3072", 00:22:48.801 "ffdhe4096", 00:22:48.801 "ffdhe6144", 00:22:48.801 "ffdhe8192" 00:22:48.801 ] 00:22:48.801 } 00:22:48.801 }, 00:22:48.801 { 00:22:48.801 "method": "bdev_nvme_set_hotplug", 00:22:48.801 "params": { 00:22:48.801 "period_us": 100000, 00:22:48.801 "enable": false 00:22:48.801 } 00:22:48.801 }, 00:22:48.801 { 00:22:48.801 "method": "bdev_malloc_create", 00:22:48.801 "params": { 00:22:48.801 "name": "malloc0", 00:22:48.801 "num_blocks": 8192, 00:22:48.801 "block_size": 4096, 00:22:48.801 "physical_block_size": 4096, 00:22:48.801 "uuid": "7dd69ac7-8d09-47f5-bda5-3a4918f07bd4", 00:22:48.801 "optimal_io_boundary": 0, 00:22:48.801 "md_size": 0, 00:22:48.801 "dif_type": 0, 00:22:48.801 "dif_is_head_of_md": false, 00:22:48.801 "dif_pi_format": 0 00:22:48.801 } 00:22:48.801 }, 00:22:48.801 { 00:22:48.801 "method": "bdev_wait_for_examine" 00:22:48.801 } 00:22:48.801 ] 00:22:48.801 }, 00:22:48.801 { 00:22:48.801 "subsystem": "nbd", 00:22:48.801 "config": [] 00:22:48.801 }, 00:22:48.801 { 00:22:48.801 "subsystem": "scheduler", 00:22:48.801 "config": [ 00:22:48.801 { 00:22:48.801 "method": "framework_set_scheduler", 00:22:48.801 "params": { 00:22:48.801 "name": "static" 00:22:48.801 } 00:22:48.801 } 00:22:48.801 ] 00:22:48.802 }, 00:22:48.802 { 00:22:48.802 "subsystem": "nvmf", 00:22:48.802 "config": [ 00:22:48.802 { 00:22:48.802 "method": "nvmf_set_config", 00:22:48.802 "params": { 00:22:48.802 "discovery_filter": "match_any", 00:22:48.802 "admin_cmd_passthru": { 00:22:48.802 "identify_ctrlr": false 00:22:48.802 }, 00:22:48.802 "dhchap_digests": [ 00:22:48.802 "sha256", 00:22:48.802 "sha384", 00:22:48.802 "sha512" 00:22:48.802 ], 00:22:48.802 "dhchap_dhgroups": [ 00:22:48.802 "null", 00:22:48.802 "ffdhe2048", 00:22:48.802 "ffdhe3072", 00:22:48.802 "ffdhe4096", 00:22:48.802 "ffdhe6144", 00:22:48.802 "ffdhe8192" 00:22:48.802 ] 00:22:48.802 } 00:22:48.802 }, 00:22:48.802 { 00:22:48.802 "method": "nvmf_set_max_subsystems", 00:22:48.802 "params": { 00:22:48.802 "max_subsystems": 1024 00:22:48.802 } 00:22:48.802 }, 00:22:48.802 { 00:22:48.802 "method": "nvmf_set_crdt", 00:22:48.802 "params": { 
00:22:48.802 "crdt1": 0, 00:22:48.802 "crdt2": 0, 00:22:48.802 "crdt3": 0 00:22:48.802 } 00:22:48.802 }, 00:22:48.802 { 00:22:48.802 "method": "nvmf_create_transport", 00:22:48.802 "params": { 00:22:48.802 "trtype": "TCP", 00:22:48.802 "max_queue_depth": 128, 00:22:48.802 "max_io_qpairs_per_ctrlr": 127, 00:22:48.802 "in_capsule_data_size": 4096, 00:22:48.802 "max_io_size": 131072, 00:22:48.802 "io_unit_size": 131072, 00:22:48.802 "max_aq_depth": 128, 00:22:48.802 "num_shared_buffers": 511, 00:22:48.802 "buf_cache_size": 4294967295, 00:22:48.802 "dif_insert_or_strip": false, 00:22:48.802 "zcopy": false, 00:22:48.802 "c2h_success": false, 00:22:48.802 "sock_priority": 0, 00:22:48.802 "abort_timeout_sec": 1, 00:22:48.802 "ack_timeout": 0, 00:22:48.802 "data_wr_pool_size": 0 00:22:48.802 } 00:22:48.802 }, 00:22:48.802 { 00:22:48.802 "method": "nvmf_create_subsystem", 00:22:48.802 "params": { 00:22:48.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.802 "allow_any_host": false, 00:22:48.802 "serial_number": "SPDK00000000000001", 00:22:48.802 "model_number": "SPDK bdev Controller", 00:22:48.802 "max_namespaces": 10, 00:22:48.802 "min_cntlid": 1, 00:22:48.802 "max_cntlid": 65519, 00:22:48.802 "ana_reporting": false 00:22:48.802 } 00:22:48.802 }, 00:22:48.802 { 00:22:48.802 "method": "nvmf_subsystem_add_host", 00:22:48.802 "params": { 00:22:48.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.802 "host": "nqn.2016-06.io.spdk:host1", 00:22:48.802 "psk": "key0" 00:22:48.802 } 00:22:48.802 }, 00:22:48.802 { 00:22:48.802 "method": "nvmf_subsystem_add_ns", 00:22:48.802 "params": { 00:22:48.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.802 "namespace": { 00:22:48.802 "nsid": 1, 00:22:48.802 "bdev_name": "malloc0", 00:22:48.802 "nguid": "7DD69AC78D0947F5BDA53A4918F07BD4", 00:22:48.802 "uuid": "7dd69ac7-8d09-47f5-bda5-3a4918f07bd4", 00:22:48.802 "no_auto_visible": false 00:22:48.802 } 00:22:48.802 } 00:22:48.802 }, 00:22:48.802 { 00:22:48.802 "method": "nvmf_subsystem_add_listener", 00:22:48.802 "params": { 00:22:48.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.802 "listen_address": { 00:22:48.802 "trtype": "TCP", 00:22:48.802 "adrfam": "IPv4", 00:22:48.802 "traddr": "10.0.0.2", 00:22:48.802 "trsvcid": "4420" 00:22:48.802 }, 00:22:48.802 "secure_channel": true 00:22:48.802 } 00:22:48.802 } 00:22:48.802 ] 00:22:48.802 } 00:22:48.802 ] 00:22:48.802 }' 00:22:48.802 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.802 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1133629 00:22:48.802 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:48.802 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1133629 00:22:48.802 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1133629 ']' 00:22:48.802 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.802 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:48.802 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:48.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.802 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:48.802 17:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.802 [2024-10-14 17:39:47.928509] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:22:48.802 [2024-10-14 17:39:47.928559] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.061 [2024-10-14 17:39:48.000077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.061 [2024-10-14 17:39:48.037544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.061 [2024-10-14 17:39:48.037579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.061 [2024-10-14 17:39:48.037586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.061 [2024-10-14 17:39:48.037592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.061 [2024-10-14 17:39:48.037597] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:49.061 [2024-10-14 17:39:48.038184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.320 [2024-10-14 17:39:48.250511] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.320 [2024-10-14 17:39:48.282548] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:49.320 [2024-10-14 17:39:48.282763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.890 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:49.890 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:49.890 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:49.890 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:49.890 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.890 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.890 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1133743 00:22:49.890 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1133743 /var/tmp/bdevperf.sock 00:22:49.890 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1133743 ']' 00:22:49.890 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.890 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:49.890 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:49.890 17:39:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.890 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:22:49.890 "subsystems": [ 00:22:49.890 { 00:22:49.890 "subsystem": "keyring", 00:22:49.890 "config": [ 00:22:49.890 { 00:22:49.890 "method": "keyring_file_add_key", 00:22:49.890 "params": { 00:22:49.890 "name": "key0", 00:22:49.890 "path": "/tmp/tmp.Sa0ojhDK0R" 00:22:49.890 } 00:22:49.890 } 00:22:49.890 ] 00:22:49.890 }, 00:22:49.890 { 00:22:49.890 "subsystem": "iobuf", 00:22:49.890 "config": [ 00:22:49.890 { 00:22:49.890 "method": "iobuf_set_options", 00:22:49.890 "params": { 00:22:49.890 "small_pool_count": 8192, 00:22:49.890 "large_pool_count": 1024, 00:22:49.890 "small_bufsize": 8192, 00:22:49.890 "large_bufsize": 135168 00:22:49.890 } 00:22:49.890 } 00:22:49.890 ] 00:22:49.890 }, 00:22:49.890 { 00:22:49.890 "subsystem": "sock", 00:22:49.890 "config": [ 00:22:49.890 { 00:22:49.890 "method": "sock_set_default_impl", 00:22:49.890 "params": { 00:22:49.890 "impl_name": "posix" 00:22:49.890 } 00:22:49.890 }, 00:22:49.890 { 00:22:49.890 "method": "sock_impl_set_options", 00:22:49.890 "params": { 00:22:49.890 "impl_name": "ssl", 00:22:49.890 "recv_buf_size": 4096, 00:22:49.890 "send_buf_size": 4096, 00:22:49.890 "enable_recv_pipe": true, 00:22:49.890 "enable_quickack": false, 00:22:49.890 "enable_placement_id": 0, 00:22:49.890 "enable_zerocopy_send_server": true, 00:22:49.890 "enable_zerocopy_send_client": false, 00:22:49.890 "zerocopy_threshold": 0, 00:22:49.890 "tls_version": 0, 00:22:49.890 "enable_ktls": false 00:22:49.890 } 00:22:49.890 }, 00:22:49.890 { 00:22:49.890 "method": "sock_impl_set_options", 00:22:49.890 "params": { 00:22:49.890 "impl_name": "posix", 00:22:49.890 "recv_buf_size": 2097152, 00:22:49.890 "send_buf_size": 2097152, 00:22:49.890 "enable_recv_pipe": true, 00:22:49.890 "enable_quickack": false, 00:22:49.890 "enable_placement_id": 0, 00:22:49.890 "enable_zerocopy_send_server": true, 00:22:49.890 "enable_zerocopy_send_client": false, 00:22:49.890 "zerocopy_threshold": 0, 00:22:49.890 "tls_version": 0, 00:22:49.890 "enable_ktls": false 00:22:49.890 } 00:22:49.890 } 00:22:49.890 ] 00:22:49.890 }, 00:22:49.890 { 00:22:49.890 "subsystem": "vmd", 00:22:49.890 "config": [] 00:22:49.890 }, 00:22:49.890 { 00:22:49.890 "subsystem": "accel", 00:22:49.890 "config": [ 00:22:49.890 { 00:22:49.890 "method": "accel_set_options", 00:22:49.890 "params": { 00:22:49.890 "small_cache_size": 128, 00:22:49.890 "large_cache_size": 16, 00:22:49.890 "task_count": 2048, 00:22:49.890 "sequence_count": 2048, 00:22:49.890 "buf_count": 2048 00:22:49.890 } 00:22:49.890 } 00:22:49.890 ] 00:22:49.890 }, 00:22:49.890 { 00:22:49.890 "subsystem": "bdev", 00:22:49.890 "config": [ 00:22:49.890 { 00:22:49.890 "method": "bdev_set_options", 00:22:49.890 "params": { 00:22:49.890 "bdev_io_pool_size": 65535, 00:22:49.890 "bdev_io_cache_size": 256, 00:22:49.890 "bdev_auto_examine": true, 00:22:49.890 "iobuf_small_cache_size": 128, 00:22:49.890 "iobuf_large_cache_size": 16 00:22:49.890 } 00:22:49.890 }, 00:22:49.890 { 00:22:49.890 "method": "bdev_raid_set_options", 00:22:49.890 "params": { 00:22:49.890 "process_window_size_kb": 1024, 00:22:49.890 "process_max_bandwidth_mb_sec": 0 00:22:49.890 } 00:22:49.890 }, 00:22:49.890 { 00:22:49.890 "method": 
"bdev_iscsi_set_options", 00:22:49.890 "params": { 00:22:49.890 "timeout_sec": 30 00:22:49.890 } 00:22:49.890 }, 00:22:49.890 { 00:22:49.890 "method": "bdev_nvme_set_options", 00:22:49.890 "params": { 00:22:49.890 "action_on_timeout": "none", 00:22:49.890 "timeout_us": 0, 00:22:49.890 "timeout_admin_us": 0, 00:22:49.890 "keep_alive_timeout_ms": 10000, 00:22:49.890 "arbitration_burst": 0, 00:22:49.890 "low_priority_weight": 0, 00:22:49.890 "medium_priority_weight": 0, 00:22:49.890 "high_priority_weight": 0, 00:22:49.890 "nvme_adminq_poll_period_us": 10000, 00:22:49.890 "nvme_ioq_poll_period_us": 0, 00:22:49.890 "io_queue_requests": 512, 00:22:49.890 "delay_cmd_submit": true, 00:22:49.890 "transport_retry_count": 4, 00:22:49.890 "bdev_retry_count": 3, 00:22:49.890 "transport_ack_timeout": 0, 00:22:49.890 "ctrlr_loss_timeout_sec": 0, 00:22:49.890 "reconnect_delay_sec": 0, 00:22:49.890 "fast_io_fail_timeout_sec": 0, 00:22:49.890 "disable_auto_failback": false, 00:22:49.890 "generate_uuids": false, 00:22:49.890 "transport_tos": 0, 00:22:49.890 "nvme_error_stat": false, 00:22:49.890 "rdma_srq_size": 0, 00:22:49.890 "io_path_stat": false, 00:22:49.890 "allow_accel_sequence": false, 00:22:49.890 "rdma_max_cq_size": 0, 00:22:49.890 "rdma_cm_event_timeout_ms": 0, 00:22:49.890 "dhchap_digests": [ 00:22:49.890 "sha256", 00:22:49.890 "sha384", 00:22:49.890 "sha512" 00:22:49.890 ], 00:22:49.890 "dhchap_dhgroups": [ 00:22:49.890 "null", 00:22:49.890 "ffdhe2048", 00:22:49.890 "ffdhe3072", 00:22:49.890 "ffdhe4096", 00:22:49.890 "ffdhe6144", 00:22:49.890 "ffdhe8192" 00:22:49.890 ] 00:22:49.890 } 00:22:49.890 }, 00:22:49.890 { 00:22:49.891 "method": "bdev_nvme_attach_controller", 00:22:49.891 "params": { 00:22:49.891 "name": "TLSTEST", 00:22:49.891 "trtype": "TCP", 00:22:49.891 "adrfam": "IPv4", 00:22:49.891 "traddr": "10.0.0.2", 00:22:49.891 "trsvcid": "4420", 00:22:49.891 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.891 "prchk_reftag": false, 00:22:49.891 "prchk_guard": false, 00:22:49.891 "ctrlr_loss_timeout_sec": 0, 00:22:49.891 "reconnect_delay_sec": 0, 00:22:49.891 "fast_io_fail_timeout_sec": 0, 00:22:49.891 "psk": "key0", 00:22:49.891 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:49.891 "hdgst": false, 00:22:49.891 "ddgst": false, 00:22:49.891 "multipath": "multipath" 00:22:49.891 } 00:22:49.891 }, 00:22:49.891 { 00:22:49.891 "method": "bdev_nvme_set_hotplug", 00:22:49.891 "params": { 00:22:49.891 "period_us": 100000, 00:22:49.891 "enable": false 00:22:49.891 } 00:22:49.891 }, 00:22:49.891 { 00:22:49.891 "method": "bdev_wait_for_examine" 00:22:49.891 } 00:22:49.891 ] 00:22:49.891 }, 00:22:49.891 { 00:22:49.891 "subsystem": "nbd", 00:22:49.891 "config": [] 00:22:49.891 } 00:22:49.891 ] 00:22:49.891 }' 00:22:49.891 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:49.891 17:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.891 [2024-10-14 17:39:48.828613] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:22:49.891 [2024-10-14 17:39:48.828663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133743 ] 00:22:49.891 [2024-10-14 17:39:48.895018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.891 [2024-10-14 17:39:48.935199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.150 [2024-10-14 17:39:49.086361] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:50.716 17:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:50.716 17:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:50.716 17:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:50.716 Running I/O for 10 seconds... 00:22:53.031 5407.00 IOPS, 21.12 MiB/s [2024-10-14T15:39:52.736Z] 5406.50 IOPS, 21.12 MiB/s [2024-10-14T15:39:54.114Z] 5487.00 IOPS, 21.43 MiB/s [2024-10-14T15:39:54.833Z] 5499.50 IOPS, 21.48 MiB/s [2024-10-14T15:39:55.770Z] 5534.00 IOPS, 21.62 MiB/s [2024-10-14T15:39:57.147Z] 5542.17 IOPS, 21.65 MiB/s [2024-10-14T15:39:58.083Z] 5548.14 IOPS, 21.67 MiB/s [2024-10-14T15:39:59.020Z] 5558.00 IOPS, 21.71 MiB/s [2024-10-14T15:39:59.956Z] 5562.33 IOPS, 21.73 MiB/s [2024-10-14T15:39:59.956Z] 5575.00 IOPS, 21.78 MiB/s 00:23:00.818 Latency(us) 00:23:00.818 [2024-10-14T15:39:59.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.818 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:00.818 Verification LBA range: start 0x0 length 0x2000 00:23:00.818 TLSTESTn1 : 10.02 5577.63 21.79 0.00 0.00 22914.03 6116.69 27587.54 00:23:00.818 [2024-10-14T15:39:59.956Z] =================================================================================================================== 00:23:00.818 [2024-10-14T15:39:59.956Z] Total : 5577.63 21.79 0.00 0.00 22914.03 6116.69 27587.54 00:23:00.818 { 00:23:00.818 "results": [ 00:23:00.818 { 00:23:00.818 "job": "TLSTESTn1", 00:23:00.818 "core_mask": "0x4", 00:23:00.818 "workload": "verify", 00:23:00.818 "status": "finished", 00:23:00.818 "verify_range": { 00:23:00.818 "start": 0, 00:23:00.818 "length": 8192 00:23:00.818 }, 00:23:00.818 "queue_depth": 128, 00:23:00.818 "io_size": 4096, 00:23:00.818 "runtime": 10.018051, 00:23:00.818 "iops": 5577.63181680748, 00:23:00.818 "mibps": 21.78762428440422, 00:23:00.818 "io_failed": 0, 00:23:00.818 "io_timeout": 0, 00:23:00.818 "avg_latency_us": 22914.027990663166, 00:23:00.818 "min_latency_us": 6116.693333333334, 00:23:00.818 "max_latency_us": 27587.53523809524 00:23:00.818 } 00:23:00.818 ], 00:23:00.818 "core_count": 1 00:23:00.818 } 00:23:00.818 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:00.818 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1133743 00:23:00.818 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1133743 ']' 00:23:00.818 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1133743 00:23:00.818 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:23:00.818 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:00.818 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1133743 00:23:00.818 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:00.818 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:00.818 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1133743' 00:23:00.818 killing process with pid 1133743 00:23:00.818 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1133743 00:23:00.818 Received shutdown signal, test time was about 10.000000 seconds 00:23:00.818 00:23:00.818 Latency(us) 00:23:00.818 [2024-10-14T15:39:59.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.818 [2024-10-14T15:39:59.956Z] =================================================================================================================== 00:23:00.818 [2024-10-14T15:39:59.956Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:00.818 17:39:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1133743 00:23:01.077 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1133629 00:23:01.077 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1133629 ']' 00:23:01.077 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1133629 00:23:01.077 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:01.077 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:01.077 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1133629 00:23:01.077 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:01.077 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:01.077 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1133629' 00:23:01.077 killing process with pid 1133629 00:23:01.077 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1133629 00:23:01.077 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1133629 00:23:01.077 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:01.077 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:01.077 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:01.077 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.336 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1135589 00:23:01.336 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:01.336 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1135589 
00:23:01.336 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1135589 ']' 00:23:01.336 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.336 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:01.336 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.336 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:01.336 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.336 [2024-10-14 17:40:00.271415] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:23:01.336 [2024-10-14 17:40:00.271466] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.336 [2024-10-14 17:40:00.345077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.336 [2024-10-14 17:40:00.382076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.336 [2024-10-14 17:40:00.382110] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.336 [2024-10-14 17:40:00.382117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.336 [2024-10-14 17:40:00.382123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.336 [2024-10-14 17:40:00.382127] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:01.336 [2024-10-14 17:40:00.382711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.594 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:01.594 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:01.594 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:01.594 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:01.594 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.594 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.594 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Sa0ojhDK0R 00:23:01.594 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Sa0ojhDK0R 00:23:01.594 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:01.594 [2024-10-14 17:40:00.698451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.594 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:01.853 17:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:02.112 [2024-10-14 17:40:01.095450] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:02.112 [2024-10-14 17:40:01.095673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.112 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:02.370 malloc0 00:23:02.370 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:02.370 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Sa0ojhDK0R 00:23:02.629 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:02.888 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:02.888 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1135854 00:23:02.888 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:02.888 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1135854 /var/tmp/bdevperf.sock 00:23:02.888 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1135854 ']' 00:23:02.888 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.888 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:02.888 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:02.888 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:02.888 17:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.888 [2024-10-14 17:40:01.929867] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:23:02.888 [2024-10-14 17:40:01.929916] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1135854 ] 00:23:02.888 [2024-10-14 17:40:01.997392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.146 [2024-10-14 17:40:02.038571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.146 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:03.146 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:03.146 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Sa0ojhDK0R 00:23:03.404 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:03.404 [2024-10-14 17:40:02.505535] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:03.663 nvme0n1 00:23:03.663 17:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:03.663 Running I/O for 1 seconds... 
00:23:04.599 5414.00 IOPS, 21.15 MiB/s 00:23:04.599 Latency(us) 00:23:04.599 [2024-10-14T15:40:03.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.599 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:04.599 Verification LBA range: start 0x0 length 0x2000 00:23:04.599 nvme0n1 : 1.02 5459.58 21.33 0.00 0.00 23280.53 4899.60 31332.45 00:23:04.599 [2024-10-14T15:40:03.737Z] =================================================================================================================== 00:23:04.599 [2024-10-14T15:40:03.737Z] Total : 5459.58 21.33 0.00 0.00 23280.53 4899.60 31332.45 00:23:04.599 { 00:23:04.599 "results": [ 00:23:04.599 { 00:23:04.599 "job": "nvme0n1", 00:23:04.599 "core_mask": "0x2", 00:23:04.599 "workload": "verify", 00:23:04.599 "status": "finished", 00:23:04.599 "verify_range": { 00:23:04.599 "start": 0, 00:23:04.599 "length": 8192 00:23:04.599 }, 00:23:04.599 "queue_depth": 128, 00:23:04.599 "io_size": 4096, 00:23:04.599 "runtime": 1.015097, 00:23:04.599 "iops": 5459.576769510697, 00:23:04.599 "mibps": 21.32647175590116, 00:23:04.599 "io_failed": 0, 00:23:04.599 "io_timeout": 0, 00:23:04.599 "avg_latency_us": 23280.526233953704, 00:23:04.599 "min_latency_us": 4899.596190476191, 00:23:04.599 "max_latency_us": 31332.449523809522 00:23:04.599 } 00:23:04.599 ], 00:23:04.599 "core_count": 1 00:23:04.599 } 00:23:04.599 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1135854 00:23:04.599 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1135854 ']' 00:23:04.599 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1135854 00:23:04.599 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:04.599 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:04.599 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1135854 00:23:04.859 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:04.859 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:04.859 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1135854' 00:23:04.859 killing process with pid 1135854 00:23:04.859 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1135854 00:23:04.859 Received shutdown signal, test time was about 1.000000 seconds 00:23:04.859 00:23:04.859 Latency(us) 00:23:04.859 [2024-10-14T15:40:03.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.859 [2024-10-14T15:40:03.997Z] =================================================================================================================== 00:23:04.859 [2024-10-14T15:40:03.997Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:04.859 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1135854 00:23:04.859 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1135589 00:23:04.859 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1135589 ']' 00:23:04.859 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1135589 00:23:04.859 17:40:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:04.859 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:04.859 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1135589 00:23:04.859 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:04.859 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:04.859 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1135589' 00:23:04.859 killing process with pid 1135589 00:23:04.859 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1135589 00:23:04.859 17:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1135589 00:23:05.118 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:05.118 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:05.118 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:05.118 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.118 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1136314 00:23:05.118 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:05.118 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1136314 00:23:05.118 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1136314 ']' 00:23:05.118 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.118 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:05.118 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.118 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:05.118 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.118 [2024-10-14 17:40:04.210159] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:23:05.118 [2024-10-14 17:40:04.210206] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.378 [2024-10-14 17:40:04.280473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.378 [2024-10-14 17:40:04.315916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.378 [2024-10-14 17:40:04.315950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:05.378 [2024-10-14 17:40:04.315956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.378 [2024-10-14 17:40:04.315963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.378 [2024-10-14 17:40:04.315969] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.378 [2024-10-14 17:40:04.316518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.378 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:05.378 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:05.378 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:05.378 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:05.378 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.378 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.378 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:05.378 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.378 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.378 [2024-10-14 17:40:04.458507] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.378 malloc0 00:23:05.378 [2024-10-14 17:40:04.486611] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:05.378 [2024-10-14 17:40:04.486848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.378 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.378 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1136333 00:23:05.378 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1136333 /var/tmp/bdevperf.sock 00:23:05.378 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:05.378 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1136333 ']' 00:23:05.378 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:05.378 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:05.637 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:05.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:05.637 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:05.637 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.637 [2024-10-14 17:40:04.561139] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:23:05.637 [2024-10-14 17:40:04.561180] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136333 ] 00:23:05.637 [2024-10-14 17:40:04.625011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.637 [2024-10-14 17:40:04.667546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.637 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:05.637 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:05.637 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Sa0ojhDK0R 00:23:05.896 17:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:06.155 [2024-10-14 17:40:05.115489] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:06.155 nvme0n1 00:23:06.155 17:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:06.155 Running I/O for 1 seconds... 00:23:07.534 5430.00 IOPS, 21.21 MiB/s 00:23:07.534 Latency(us) 00:23:07.534 [2024-10-14T15:40:06.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.534 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:07.534 Verification LBA range: start 0x0 length 0x2000 00:23:07.534 nvme0n1 : 1.02 5462.46 21.34 0.00 0.00 23258.86 6054.28 25215.76 00:23:07.534 [2024-10-14T15:40:06.672Z] =================================================================================================================== 00:23:07.534 [2024-10-14T15:40:06.672Z] Total : 5462.46 21.34 0.00 0.00 23258.86 6054.28 25215.76 00:23:07.534 { 00:23:07.534 "results": [ 00:23:07.534 { 00:23:07.534 "job": "nvme0n1", 00:23:07.534 "core_mask": "0x2", 00:23:07.534 "workload": "verify", 00:23:07.534 "status": "finished", 00:23:07.534 "verify_range": { 00:23:07.534 "start": 0, 00:23:07.534 "length": 8192 00:23:07.534 }, 00:23:07.534 "queue_depth": 128, 00:23:07.534 "io_size": 4096, 00:23:07.534 "runtime": 1.017674, 00:23:07.534 "iops": 5462.456543057993, 00:23:07.534 "mibps": 21.337720871320286, 00:23:07.534 "io_failed": 0, 00:23:07.534 "io_timeout": 0, 00:23:07.534 "avg_latency_us": 23258.86332279701, 00:23:07.534 "min_latency_us": 6054.278095238095, 00:23:07.534 "max_latency_us": 25215.75619047619 00:23:07.534 } 00:23:07.534 ], 00:23:07.534 "core_count": 1 00:23:07.534 } 00:23:07.534 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:07.534 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.534 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.534 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.534 17:40:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:07.534 "subsystems": [ 00:23:07.534 { 00:23:07.534 "subsystem": "keyring", 00:23:07.534 "config": [ 00:23:07.534 { 00:23:07.534 "method": "keyring_file_add_key", 00:23:07.534 "params": { 00:23:07.534 "name": "key0", 00:23:07.534 "path": "/tmp/tmp.Sa0ojhDK0R" 00:23:07.534 } 00:23:07.534 } 00:23:07.534 ] 00:23:07.534 }, 00:23:07.534 { 00:23:07.534 "subsystem": "iobuf", 00:23:07.534 "config": [ 00:23:07.534 { 00:23:07.534 "method": "iobuf_set_options", 00:23:07.534 "params": { 00:23:07.534 "small_pool_count": 8192, 00:23:07.534 "large_pool_count": 1024, 00:23:07.534 "small_bufsize": 8192, 00:23:07.534 "large_bufsize": 135168 00:23:07.534 } 00:23:07.534 } 00:23:07.534 ] 00:23:07.534 }, 00:23:07.534 { 00:23:07.534 "subsystem": "sock", 00:23:07.534 "config": [ 00:23:07.534 { 00:23:07.534 "method": "sock_set_default_impl", 00:23:07.534 "params": { 00:23:07.534 "impl_name": "posix" 00:23:07.534 } 00:23:07.534 }, 00:23:07.534 { 00:23:07.534 "method": "sock_impl_set_options", 00:23:07.534 "params": { 00:23:07.534 "impl_name": "ssl", 00:23:07.534 "recv_buf_size": 4096, 00:23:07.534 "send_buf_size": 4096, 00:23:07.534 "enable_recv_pipe": true, 00:23:07.534 "enable_quickack": false, 00:23:07.534 "enable_placement_id": 0, 00:23:07.534 "enable_zerocopy_send_server": true, 00:23:07.534 "enable_zerocopy_send_client": false, 00:23:07.534 "zerocopy_threshold": 0, 00:23:07.534 "tls_version": 0, 00:23:07.534 "enable_ktls": false 00:23:07.534 } 00:23:07.534 }, 00:23:07.534 { 00:23:07.534 "method": "sock_impl_set_options", 00:23:07.534 "params": { 00:23:07.534 "impl_name": "posix", 00:23:07.534 "recv_buf_size": 2097152, 00:23:07.534 "send_buf_size": 2097152, 00:23:07.534 "enable_recv_pipe": true, 00:23:07.534 "enable_quickack": false, 00:23:07.534 "enable_placement_id": 0, 00:23:07.534 "enable_zerocopy_send_server": true, 00:23:07.534 "enable_zerocopy_send_client": false, 00:23:07.534 "zerocopy_threshold": 0, 00:23:07.534 "tls_version": 0, 00:23:07.534 "enable_ktls": false 00:23:07.534 } 00:23:07.534 } 00:23:07.534 ] 00:23:07.534 }, 00:23:07.534 { 00:23:07.534 "subsystem": "vmd", 00:23:07.534 "config": [] 00:23:07.534 }, 00:23:07.534 { 00:23:07.534 "subsystem": "accel", 00:23:07.534 "config": [ 00:23:07.534 { 00:23:07.534 "method": "accel_set_options", 00:23:07.534 "params": { 00:23:07.534 "small_cache_size": 128, 00:23:07.534 "large_cache_size": 16, 00:23:07.534 "task_count": 2048, 00:23:07.534 "sequence_count": 2048, 00:23:07.534 "buf_count": 2048 00:23:07.534 } 00:23:07.534 } 00:23:07.534 ] 00:23:07.534 }, 00:23:07.534 { 00:23:07.534 "subsystem": "bdev", 00:23:07.534 "config": [ 00:23:07.534 { 00:23:07.534 "method": "bdev_set_options", 00:23:07.534 "params": { 00:23:07.534 "bdev_io_pool_size": 65535, 00:23:07.534 "bdev_io_cache_size": 256, 00:23:07.534 "bdev_auto_examine": true, 00:23:07.534 "iobuf_small_cache_size": 128, 00:23:07.534 "iobuf_large_cache_size": 16 00:23:07.534 } 00:23:07.534 }, 00:23:07.534 { 00:23:07.534 "method": "bdev_raid_set_options", 00:23:07.534 "params": { 00:23:07.535 "process_window_size_kb": 1024, 00:23:07.535 "process_max_bandwidth_mb_sec": 0 00:23:07.535 } 00:23:07.535 }, 00:23:07.535 { 00:23:07.535 "method": "bdev_iscsi_set_options", 00:23:07.535 "params": { 00:23:07.535 "timeout_sec": 30 00:23:07.535 } 00:23:07.535 }, 00:23:07.535 { 00:23:07.535 "method": "bdev_nvme_set_options", 00:23:07.535 "params": { 00:23:07.535 "action_on_timeout": "none", 00:23:07.535 "timeout_us": 0, 00:23:07.535 
"timeout_admin_us": 0, 00:23:07.535 "keep_alive_timeout_ms": 10000, 00:23:07.535 "arbitration_burst": 0, 00:23:07.535 "low_priority_weight": 0, 00:23:07.535 "medium_priority_weight": 0, 00:23:07.535 "high_priority_weight": 0, 00:23:07.535 "nvme_adminq_poll_period_us": 10000, 00:23:07.535 "nvme_ioq_poll_period_us": 0, 00:23:07.535 "io_queue_requests": 0, 00:23:07.535 "delay_cmd_submit": true, 00:23:07.535 "transport_retry_count": 4, 00:23:07.535 "bdev_retry_count": 3, 00:23:07.535 "transport_ack_timeout": 0, 00:23:07.535 "ctrlr_loss_timeout_sec": 0, 00:23:07.535 "reconnect_delay_sec": 0, 00:23:07.535 "fast_io_fail_timeout_sec": 0, 00:23:07.535 "disable_auto_failback": false, 00:23:07.535 "generate_uuids": false, 00:23:07.535 "transport_tos": 0, 00:23:07.535 "nvme_error_stat": false, 00:23:07.535 "rdma_srq_size": 0, 00:23:07.535 "io_path_stat": false, 00:23:07.535 "allow_accel_sequence": false, 00:23:07.535 "rdma_max_cq_size": 0, 00:23:07.535 "rdma_cm_event_timeout_ms": 0, 00:23:07.535 "dhchap_digests": [ 00:23:07.535 "sha256", 00:23:07.535 "sha384", 00:23:07.535 "sha512" 00:23:07.535 ], 00:23:07.535 "dhchap_dhgroups": [ 00:23:07.535 "null", 00:23:07.535 "ffdhe2048", 00:23:07.535 "ffdhe3072", 00:23:07.535 "ffdhe4096", 00:23:07.535 "ffdhe6144", 00:23:07.535 "ffdhe8192" 00:23:07.535 ] 00:23:07.535 } 00:23:07.535 }, 00:23:07.535 { 00:23:07.535 "method": "bdev_nvme_set_hotplug", 00:23:07.535 "params": { 00:23:07.535 "period_us": 100000, 00:23:07.535 "enable": false 00:23:07.535 } 00:23:07.535 }, 00:23:07.535 { 00:23:07.535 "method": "bdev_malloc_create", 00:23:07.535 "params": { 00:23:07.535 "name": "malloc0", 00:23:07.535 "num_blocks": 8192, 00:23:07.535 "block_size": 4096, 00:23:07.535 "physical_block_size": 4096, 00:23:07.535 "uuid": "0d542ad0-33e5-435f-8c1d-3f5fb74b94a4", 00:23:07.535 "optimal_io_boundary": 0, 00:23:07.535 "md_size": 0, 00:23:07.535 "dif_type": 0, 00:23:07.535 "dif_is_head_of_md": false, 00:23:07.535 "dif_pi_format": 0 00:23:07.535 } 00:23:07.535 }, 00:23:07.535 { 00:23:07.535 "method": "bdev_wait_for_examine" 00:23:07.535 } 00:23:07.535 ] 00:23:07.535 }, 00:23:07.535 { 00:23:07.535 "subsystem": "nbd", 00:23:07.535 "config": [] 00:23:07.535 }, 00:23:07.535 { 00:23:07.535 "subsystem": "scheduler", 00:23:07.535 "config": [ 00:23:07.535 { 00:23:07.535 "method": "framework_set_scheduler", 00:23:07.535 "params": { 00:23:07.535 "name": "static" 00:23:07.535 } 00:23:07.535 } 00:23:07.535 ] 00:23:07.535 }, 00:23:07.535 { 00:23:07.535 "subsystem": "nvmf", 00:23:07.535 "config": [ 00:23:07.535 { 00:23:07.535 "method": "nvmf_set_config", 00:23:07.535 "params": { 00:23:07.535 "discovery_filter": "match_any", 00:23:07.535 "admin_cmd_passthru": { 00:23:07.535 "identify_ctrlr": false 00:23:07.535 }, 00:23:07.535 "dhchap_digests": [ 00:23:07.535 "sha256", 00:23:07.535 "sha384", 00:23:07.535 "sha512" 00:23:07.535 ], 00:23:07.535 "dhchap_dhgroups": [ 00:23:07.535 "null", 00:23:07.535 "ffdhe2048", 00:23:07.535 "ffdhe3072", 00:23:07.535 "ffdhe4096", 00:23:07.535 "ffdhe6144", 00:23:07.535 "ffdhe8192" 00:23:07.535 ] 00:23:07.535 } 00:23:07.535 }, 00:23:07.535 { 00:23:07.535 "method": "nvmf_set_max_subsystems", 00:23:07.535 "params": { 00:23:07.535 "max_subsystems": 1024 00:23:07.535 } 00:23:07.535 }, 00:23:07.535 { 00:23:07.535 "method": "nvmf_set_crdt", 00:23:07.535 "params": { 00:23:07.535 "crdt1": 0, 00:23:07.535 "crdt2": 0, 00:23:07.535 "crdt3": 0 00:23:07.535 } 00:23:07.535 }, 00:23:07.535 { 00:23:07.535 "method": "nvmf_create_transport", 00:23:07.535 "params": { 00:23:07.535 "trtype": 
"TCP", 00:23:07.535 "max_queue_depth": 128, 00:23:07.535 "max_io_qpairs_per_ctrlr": 127, 00:23:07.535 "in_capsule_data_size": 4096, 00:23:07.535 "max_io_size": 131072, 00:23:07.535 "io_unit_size": 131072, 00:23:07.535 "max_aq_depth": 128, 00:23:07.535 "num_shared_buffers": 511, 00:23:07.535 "buf_cache_size": 4294967295, 00:23:07.535 "dif_insert_or_strip": false, 00:23:07.535 "zcopy": false, 00:23:07.535 "c2h_success": false, 00:23:07.535 "sock_priority": 0, 00:23:07.535 "abort_timeout_sec": 1, 00:23:07.535 "ack_timeout": 0, 00:23:07.535 "data_wr_pool_size": 0 00:23:07.535 } 00:23:07.535 }, 00:23:07.535 { 00:23:07.535 "method": "nvmf_create_subsystem", 00:23:07.535 "params": { 00:23:07.535 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.535 "allow_any_host": false, 00:23:07.535 "serial_number": "00000000000000000000", 00:23:07.535 "model_number": "SPDK bdev Controller", 00:23:07.535 "max_namespaces": 32, 00:23:07.535 "min_cntlid": 1, 00:23:07.535 "max_cntlid": 65519, 00:23:07.535 "ana_reporting": false 00:23:07.535 } 00:23:07.535 }, 00:23:07.535 { 00:23:07.535 "method": "nvmf_subsystem_add_host", 00:23:07.535 "params": { 00:23:07.535 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.535 "host": "nqn.2016-06.io.spdk:host1", 00:23:07.535 "psk": "key0" 00:23:07.535 } 00:23:07.535 }, 00:23:07.535 { 00:23:07.535 "method": "nvmf_subsystem_add_ns", 00:23:07.535 "params": { 00:23:07.535 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.535 "namespace": { 00:23:07.535 "nsid": 1, 00:23:07.535 "bdev_name": "malloc0", 00:23:07.535 "nguid": "0D542AD033E5435F8C1D3F5FB74B94A4", 00:23:07.535 "uuid": "0d542ad0-33e5-435f-8c1d-3f5fb74b94a4", 00:23:07.535 "no_auto_visible": false 00:23:07.535 } 00:23:07.535 } 00:23:07.535 }, 00:23:07.535 { 00:23:07.535 "method": "nvmf_subsystem_add_listener", 00:23:07.535 "params": { 00:23:07.535 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.535 "listen_address": { 00:23:07.535 "trtype": "TCP", 00:23:07.535 "adrfam": "IPv4", 00:23:07.535 "traddr": "10.0.0.2", 00:23:07.535 "trsvcid": "4420" 00:23:07.535 }, 00:23:07.535 "secure_channel": false, 00:23:07.535 "sock_impl": "ssl" 00:23:07.535 } 00:23:07.535 } 00:23:07.535 ] 00:23:07.535 } 00:23:07.535 ] 00:23:07.535 }' 00:23:07.535 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:07.795 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:07.795 "subsystems": [ 00:23:07.795 { 00:23:07.795 "subsystem": "keyring", 00:23:07.795 "config": [ 00:23:07.795 { 00:23:07.795 "method": "keyring_file_add_key", 00:23:07.795 "params": { 00:23:07.795 "name": "key0", 00:23:07.795 "path": "/tmp/tmp.Sa0ojhDK0R" 00:23:07.795 } 00:23:07.795 } 00:23:07.795 ] 00:23:07.795 }, 00:23:07.795 { 00:23:07.795 "subsystem": "iobuf", 00:23:07.795 "config": [ 00:23:07.795 { 00:23:07.795 "method": "iobuf_set_options", 00:23:07.795 "params": { 00:23:07.795 "small_pool_count": 8192, 00:23:07.795 "large_pool_count": 1024, 00:23:07.795 "small_bufsize": 8192, 00:23:07.795 "large_bufsize": 135168 00:23:07.795 } 00:23:07.795 } 00:23:07.795 ] 00:23:07.795 }, 00:23:07.795 { 00:23:07.795 "subsystem": "sock", 00:23:07.795 "config": [ 00:23:07.795 { 00:23:07.795 "method": "sock_set_default_impl", 00:23:07.795 "params": { 00:23:07.795 "impl_name": "posix" 00:23:07.795 } 00:23:07.795 }, 00:23:07.795 { 00:23:07.795 "method": "sock_impl_set_options", 00:23:07.795 "params": { 00:23:07.795 "impl_name": "ssl", 00:23:07.795 
"recv_buf_size": 4096, 00:23:07.795 "send_buf_size": 4096, 00:23:07.795 "enable_recv_pipe": true, 00:23:07.795 "enable_quickack": false, 00:23:07.795 "enable_placement_id": 0, 00:23:07.795 "enable_zerocopy_send_server": true, 00:23:07.795 "enable_zerocopy_send_client": false, 00:23:07.795 "zerocopy_threshold": 0, 00:23:07.795 "tls_version": 0, 00:23:07.795 "enable_ktls": false 00:23:07.795 } 00:23:07.795 }, 00:23:07.795 { 00:23:07.795 "method": "sock_impl_set_options", 00:23:07.795 "params": { 00:23:07.795 "impl_name": "posix", 00:23:07.795 "recv_buf_size": 2097152, 00:23:07.795 "send_buf_size": 2097152, 00:23:07.795 "enable_recv_pipe": true, 00:23:07.795 "enable_quickack": false, 00:23:07.795 "enable_placement_id": 0, 00:23:07.795 "enable_zerocopy_send_server": true, 00:23:07.795 "enable_zerocopy_send_client": false, 00:23:07.795 "zerocopy_threshold": 0, 00:23:07.795 "tls_version": 0, 00:23:07.795 "enable_ktls": false 00:23:07.795 } 00:23:07.795 } 00:23:07.795 ] 00:23:07.795 }, 00:23:07.795 { 00:23:07.795 "subsystem": "vmd", 00:23:07.795 "config": [] 00:23:07.795 }, 00:23:07.795 { 00:23:07.795 "subsystem": "accel", 00:23:07.795 "config": [ 00:23:07.795 { 00:23:07.795 "method": "accel_set_options", 00:23:07.795 "params": { 00:23:07.795 "small_cache_size": 128, 00:23:07.795 "large_cache_size": 16, 00:23:07.795 "task_count": 2048, 00:23:07.795 "sequence_count": 2048, 00:23:07.795 "buf_count": 2048 00:23:07.795 } 00:23:07.795 } 00:23:07.795 ] 00:23:07.795 }, 00:23:07.795 { 00:23:07.795 "subsystem": "bdev", 00:23:07.795 "config": [ 00:23:07.795 { 00:23:07.795 "method": "bdev_set_options", 00:23:07.795 "params": { 00:23:07.795 "bdev_io_pool_size": 65535, 00:23:07.795 "bdev_io_cache_size": 256, 00:23:07.795 "bdev_auto_examine": true, 00:23:07.795 "iobuf_small_cache_size": 128, 00:23:07.795 "iobuf_large_cache_size": 16 00:23:07.795 } 00:23:07.795 }, 00:23:07.795 { 00:23:07.795 "method": "bdev_raid_set_options", 00:23:07.795 "params": { 00:23:07.795 "process_window_size_kb": 1024, 00:23:07.795 "process_max_bandwidth_mb_sec": 0 00:23:07.795 } 00:23:07.795 }, 00:23:07.795 { 00:23:07.795 "method": "bdev_iscsi_set_options", 00:23:07.795 "params": { 00:23:07.795 "timeout_sec": 30 00:23:07.795 } 00:23:07.795 }, 00:23:07.795 { 00:23:07.795 "method": "bdev_nvme_set_options", 00:23:07.795 "params": { 00:23:07.795 "action_on_timeout": "none", 00:23:07.795 "timeout_us": 0, 00:23:07.795 "timeout_admin_us": 0, 00:23:07.795 "keep_alive_timeout_ms": 10000, 00:23:07.795 "arbitration_burst": 0, 00:23:07.795 "low_priority_weight": 0, 00:23:07.795 "medium_priority_weight": 0, 00:23:07.795 "high_priority_weight": 0, 00:23:07.795 "nvme_adminq_poll_period_us": 10000, 00:23:07.795 "nvme_ioq_poll_period_us": 0, 00:23:07.795 "io_queue_requests": 512, 00:23:07.795 "delay_cmd_submit": true, 00:23:07.795 "transport_retry_count": 4, 00:23:07.795 "bdev_retry_count": 3, 00:23:07.796 "transport_ack_timeout": 0, 00:23:07.796 "ctrlr_loss_timeout_sec": 0, 00:23:07.796 "reconnect_delay_sec": 0, 00:23:07.796 "fast_io_fail_timeout_sec": 0, 00:23:07.796 "disable_auto_failback": false, 00:23:07.796 "generate_uuids": false, 00:23:07.796 "transport_tos": 0, 00:23:07.796 "nvme_error_stat": false, 00:23:07.796 "rdma_srq_size": 0, 00:23:07.796 "io_path_stat": false, 00:23:07.796 "allow_accel_sequence": false, 00:23:07.796 "rdma_max_cq_size": 0, 00:23:07.796 "rdma_cm_event_timeout_ms": 0, 00:23:07.796 "dhchap_digests": [ 00:23:07.796 "sha256", 00:23:07.796 "sha384", 00:23:07.796 "sha512" 00:23:07.796 ], 00:23:07.796 "dhchap_dhgroups": [ 
00:23:07.796 "null", 00:23:07.796 "ffdhe2048", 00:23:07.796 "ffdhe3072", 00:23:07.796 "ffdhe4096", 00:23:07.796 "ffdhe6144", 00:23:07.796 "ffdhe8192" 00:23:07.796 ] 00:23:07.796 } 00:23:07.796 }, 00:23:07.796 { 00:23:07.796 "method": "bdev_nvme_attach_controller", 00:23:07.796 "params": { 00:23:07.796 "name": "nvme0", 00:23:07.796 "trtype": "TCP", 00:23:07.796 "adrfam": "IPv4", 00:23:07.796 "traddr": "10.0.0.2", 00:23:07.796 "trsvcid": "4420", 00:23:07.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.796 "prchk_reftag": false, 00:23:07.796 "prchk_guard": false, 00:23:07.796 "ctrlr_loss_timeout_sec": 0, 00:23:07.796 "reconnect_delay_sec": 0, 00:23:07.796 "fast_io_fail_timeout_sec": 0, 00:23:07.796 "psk": "key0", 00:23:07.796 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:07.796 "hdgst": false, 00:23:07.796 "ddgst": false, 00:23:07.796 "multipath": "multipath" 00:23:07.796 } 00:23:07.796 }, 00:23:07.796 { 00:23:07.796 "method": "bdev_nvme_set_hotplug", 00:23:07.796 "params": { 00:23:07.796 "period_us": 100000, 00:23:07.796 "enable": false 00:23:07.796 } 00:23:07.796 }, 00:23:07.796 { 00:23:07.796 "method": "bdev_enable_histogram", 00:23:07.796 "params": { 00:23:07.796 "name": "nvme0n1", 00:23:07.796 "enable": true 00:23:07.796 } 00:23:07.796 }, 00:23:07.796 { 00:23:07.796 "method": "bdev_wait_for_examine" 00:23:07.796 } 00:23:07.796 ] 00:23:07.796 }, 00:23:07.796 { 00:23:07.796 "subsystem": "nbd", 00:23:07.796 "config": [] 00:23:07.796 } 00:23:07.796 ] 00:23:07.796 }' 00:23:07.796 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1136333 00:23:07.796 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1136333 ']' 00:23:07.796 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1136333 00:23:07.796 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:07.796 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:07.796 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1136333 00:23:07.796 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:07.796 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:07.796 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1136333' 00:23:07.796 killing process with pid 1136333 00:23:07.796 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1136333 00:23:07.796 Received shutdown signal, test time was about 1.000000 seconds 00:23:07.796 00:23:07.796 Latency(us) 00:23:07.796 [2024-10-14T15:40:06.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.796 [2024-10-14T15:40:06.934Z] =================================================================================================================== 00:23:07.796 [2024-10-14T15:40:06.934Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:07.796 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1136333 00:23:07.796 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1136314 00:23:07.796 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1136314 ']' 00:23:07.796 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@954 -- # kill -0 1136314 00:23:07.796 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:07.796 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:07.796 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1136314 00:23:08.056 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:08.056 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:08.056 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1136314' 00:23:08.056 killing process with pid 1136314 00:23:08.056 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1136314 00:23:08.056 17:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1136314 00:23:08.056 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:08.056 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:08.056 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:08.056 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:08.056 "subsystems": [ 00:23:08.056 { 00:23:08.056 "subsystem": "keyring", 00:23:08.056 "config": [ 00:23:08.056 { 00:23:08.056 "method": "keyring_file_add_key", 00:23:08.056 "params": { 00:23:08.056 "name": "key0", 00:23:08.056 "path": "/tmp/tmp.Sa0ojhDK0R" 00:23:08.056 } 00:23:08.056 } 00:23:08.056 ] 00:23:08.056 }, 00:23:08.056 { 00:23:08.056 "subsystem": "iobuf", 00:23:08.056 "config": [ 00:23:08.056 { 00:23:08.056 "method": "iobuf_set_options", 00:23:08.056 "params": { 00:23:08.056 "small_pool_count": 8192, 00:23:08.056 "large_pool_count": 1024, 00:23:08.056 "small_bufsize": 8192, 00:23:08.056 "large_bufsize": 135168 00:23:08.056 } 00:23:08.056 } 00:23:08.056 ] 00:23:08.056 }, 00:23:08.056 { 00:23:08.056 "subsystem": "sock", 00:23:08.056 "config": [ 00:23:08.056 { 00:23:08.056 "method": "sock_set_default_impl", 00:23:08.056 "params": { 00:23:08.056 "impl_name": "posix" 00:23:08.056 } 00:23:08.056 }, 00:23:08.056 { 00:23:08.056 "method": "sock_impl_set_options", 00:23:08.056 "params": { 00:23:08.056 "impl_name": "ssl", 00:23:08.056 "recv_buf_size": 4096, 00:23:08.056 "send_buf_size": 4096, 00:23:08.056 "enable_recv_pipe": true, 00:23:08.056 "enable_quickack": false, 00:23:08.056 "enable_placement_id": 0, 00:23:08.056 "enable_zerocopy_send_server": true, 00:23:08.056 "enable_zerocopy_send_client": false, 00:23:08.056 "zerocopy_threshold": 0, 00:23:08.056 "tls_version": 0, 00:23:08.056 "enable_ktls": false 00:23:08.056 } 00:23:08.056 }, 00:23:08.056 { 00:23:08.056 "method": "sock_impl_set_options", 00:23:08.056 "params": { 00:23:08.056 "impl_name": "posix", 00:23:08.056 "recv_buf_size": 2097152, 00:23:08.056 "send_buf_size": 2097152, 00:23:08.056 "enable_recv_pipe": true, 00:23:08.056 "enable_quickack": false, 00:23:08.056 "enable_placement_id": 0, 00:23:08.056 "enable_zerocopy_send_server": true, 00:23:08.056 "enable_zerocopy_send_client": false, 00:23:08.056 "zerocopy_threshold": 0, 00:23:08.056 "tls_version": 0, 00:23:08.056 "enable_ktls": false 00:23:08.056 } 00:23:08.056 } 00:23:08.056 ] 00:23:08.056 }, 00:23:08.056 { 00:23:08.056 
"subsystem": "vmd", 00:23:08.056 "config": [] 00:23:08.056 }, 00:23:08.056 { 00:23:08.056 "subsystem": "accel", 00:23:08.056 "config": [ 00:23:08.056 { 00:23:08.056 "method": "accel_set_options", 00:23:08.056 "params": { 00:23:08.056 "small_cache_size": 128, 00:23:08.056 "large_cache_size": 16, 00:23:08.056 "task_count": 2048, 00:23:08.056 "sequence_count": 2048, 00:23:08.056 "buf_count": 2048 00:23:08.056 } 00:23:08.056 } 00:23:08.056 ] 00:23:08.056 }, 00:23:08.056 { 00:23:08.056 "subsystem": "bdev", 00:23:08.056 "config": [ 00:23:08.056 { 00:23:08.056 "method": "bdev_set_options", 00:23:08.056 "params": { 00:23:08.056 "bdev_io_pool_size": 65535, 00:23:08.056 "bdev_io_cache_size": 256, 00:23:08.056 "bdev_auto_examine": true, 00:23:08.056 "iobuf_small_cache_size": 128, 00:23:08.056 "iobuf_large_cache_size": 16 00:23:08.056 } 00:23:08.056 }, 00:23:08.056 { 00:23:08.056 "method": "bdev_raid_set_options", 00:23:08.056 "params": { 00:23:08.056 "process_window_size_kb": 1024, 00:23:08.056 "process_max_bandwidth_mb_sec": 0 00:23:08.056 } 00:23:08.056 }, 00:23:08.056 { 00:23:08.056 "method": "bdev_iscsi_set_options", 00:23:08.056 "params": { 00:23:08.056 "timeout_sec": 30 00:23:08.056 } 00:23:08.056 }, 00:23:08.056 { 00:23:08.056 "method": "bdev_nvme_set_options", 00:23:08.056 "params": { 00:23:08.056 "action_on_timeout": "none", 00:23:08.056 "timeout_us": 0, 00:23:08.056 "timeout_admin_us": 0, 00:23:08.056 "keep_alive_timeout_ms": 10000, 00:23:08.056 "arbitration_burst": 0, 00:23:08.056 "low_priority_weight": 0, 00:23:08.056 "medium_priority_weight": 0, 00:23:08.056 "high_priority_weight": 0, 00:23:08.056 "nvme_adminq_poll_period_us": 10000, 00:23:08.056 "nvme_ioq_poll_period_us": 0, 00:23:08.056 "io_queue_requests": 0, 00:23:08.056 "delay_cmd_submit": true, 00:23:08.056 "transport_retry_count": 4, 00:23:08.056 "bdev_retry_count": 3, 00:23:08.056 "transport_ack_timeout": 0, 00:23:08.056 "ctrlr_loss_timeout_sec": 0, 00:23:08.056 "reconnect_delay_sec": 0, 00:23:08.056 "fast_io_fail_timeout_sec": 0, 00:23:08.056 "disable_auto_failback": false, 00:23:08.056 "generate_uuids": false, 00:23:08.056 "transport_tos": 0, 00:23:08.056 "nvme_error_stat": false, 00:23:08.056 "rdma_srq_size": 0, 00:23:08.056 "io_path_stat": false, 00:23:08.056 "allow_accel_sequence": false, 00:23:08.056 "rdma_max_cq_size": 0, 00:23:08.056 "rdma_cm_event_timeout_ms": 0, 00:23:08.056 "dhchap_digests": [ 00:23:08.056 "sha256", 00:23:08.056 "sha384", 00:23:08.056 "sha512" 00:23:08.056 ], 00:23:08.056 "dhchap_dhgroups": [ 00:23:08.056 "null", 00:23:08.056 "ffdhe2048", 00:23:08.056 "ffdhe3072", 00:23:08.056 "ffdhe4096", 00:23:08.056 "ffdhe6144", 00:23:08.056 "ffdhe8192" 00:23:08.056 ] 00:23:08.056 } 00:23:08.056 }, 00:23:08.056 { 00:23:08.056 "method": "bdev_nvme_set_hotplug", 00:23:08.056 "params": { 00:23:08.056 "period_us": 100000, 00:23:08.056 "enable": false 00:23:08.056 } 00:23:08.056 }, 00:23:08.056 { 00:23:08.056 "method": "bdev_malloc_create", 00:23:08.056 "params": { 00:23:08.056 "name": "malloc0", 00:23:08.056 "num_blocks": 8192, 00:23:08.056 "block_size": 4096, 00:23:08.056 "physical_block_size": 4096, 00:23:08.056 "uuid": "0d542ad0-33e5-435f-8c1d-3f5fb74b94a4", 00:23:08.056 "optimal_io_boundary": 0, 00:23:08.056 "md_size": 0, 00:23:08.056 "dif_type": 0, 00:23:08.056 "dif_is_head_of_md": false, 00:23:08.056 "dif_pi_format": 0 00:23:08.056 } 00:23:08.056 }, 00:23:08.056 { 00:23:08.056 "method": "bdev_wait_for_examine" 00:23:08.056 } 00:23:08.056 ] 00:23:08.056 }, 00:23:08.056 { 00:23:08.056 "subsystem": "nbd", 
00:23:08.056 "config": [] 00:23:08.056 }, 00:23:08.056 { 00:23:08.056 "subsystem": "scheduler", 00:23:08.056 "config": [ 00:23:08.056 { 00:23:08.056 "method": "framework_set_scheduler", 00:23:08.056 "params": { 00:23:08.056 "name": "static" 00:23:08.056 } 00:23:08.056 } 00:23:08.056 ] 00:23:08.056 }, 00:23:08.056 { 00:23:08.056 "subsystem": "nvmf", 00:23:08.056 "config": [ 00:23:08.056 { 00:23:08.056 "method": "nvmf_set_config", 00:23:08.056 "params": { 00:23:08.056 "discovery_filter": "match_any", 00:23:08.057 "admin_cmd_passthru": { 00:23:08.057 "identify_ctrlr": false 00:23:08.057 }, 00:23:08.057 "dhchap_digests": [ 00:23:08.057 "sha256", 00:23:08.057 "sha384", 00:23:08.057 "sha512" 00:23:08.057 ], 00:23:08.057 "dhchap_dhgroups": [ 00:23:08.057 "null", 00:23:08.057 "ffdhe2048", 00:23:08.057 "ffdhe3072", 00:23:08.057 "ffdhe4096", 00:23:08.057 "ffdhe6144", 00:23:08.057 "ffdhe8192" 00:23:08.057 ] 00:23:08.057 } 00:23:08.057 }, 00:23:08.057 { 00:23:08.057 "method": "nvmf_set_max_subsystems", 00:23:08.057 "params": { 00:23:08.057 "max_subsystems": 1024 00:23:08.057 } 00:23:08.057 }, 00:23:08.057 { 00:23:08.057 "method": "nvmf_set_crdt", 00:23:08.057 "params": { 00:23:08.057 "crdt1": 0, 00:23:08.057 "crdt2": 0, 00:23:08.057 "crdt3": 0 00:23:08.057 } 00:23:08.057 }, 00:23:08.057 { 00:23:08.057 "method": "nvmf_create_transport", 00:23:08.057 "params": { 00:23:08.057 "trtype": "TCP", 00:23:08.057 "max_queue_depth": 128, 00:23:08.057 "max_io_qpairs_per_ctrlr": 127, 00:23:08.057 "in_capsule_data_size": 4096, 00:23:08.057 "max_io_size": 131072, 00:23:08.057 "io_unit_size": 131072, 00:23:08.057 "max_aq_depth": 128, 00:23:08.057 "num_shared_buffers": 511, 00:23:08.057 "buf_cache_size": 4294967295, 00:23:08.057 "dif_insert_or_strip": false, 00:23:08.057 "zcopy": false, 00:23:08.057 "c2h_success": false, 00:23:08.057 "sock_priority": 0, 00:23:08.057 "abort_timeout_sec": 1, 00:23:08.057 "ack_timeout": 0, 00:23:08.057 "data_wr_pool_size": 0 00:23:08.057 } 00:23:08.057 }, 00:23:08.057 { 00:23:08.057 "method": "nvmf_create_subsystem", 00:23:08.057 "params": { 00:23:08.057 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.057 "allow_any_host": false, 00:23:08.057 "serial_number": "00000000000000000000", 00:23:08.057 "model_number": "SPDK bdev Controller", 00:23:08.057 "max_namespaces": 32, 00:23:08.057 "min_cntlid": 1, 00:23:08.057 "max_cntlid": 65519, 00:23:08.057 "ana_reporting": false 00:23:08.057 } 00:23:08.057 }, 00:23:08.057 { 00:23:08.057 "method": "nvmf_subsystem_add_host", 00:23:08.057 "params": { 00:23:08.057 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.057 "host": "nqn.2016-06.io.spdk:host1", 00:23:08.057 "psk": "key0" 00:23:08.057 } 00:23:08.057 }, 00:23:08.057 { 00:23:08.057 "method": "nvmf_subsystem_add_ns", 00:23:08.057 "params": { 00:23:08.057 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.057 "namespace": { 00:23:08.057 "nsid": 1, 00:23:08.057 "bdev_name": "malloc0", 00:23:08.057 "nguid": "0D542AD033E5435F8C1D3F5FB74B94A4", 00:23:08.057 "uuid": "0d542ad0-33e5-435f-8c1d-3f5fb74b94a4", 00:23:08.057 "no_auto_visible": false 00:23:08.057 } 00:23:08.057 } 00:23:08.057 }, 00:23:08.057 { 00:23:08.057 "method": "nvmf_subsystem_add_listener", 00:23:08.057 "params": { 00:23:08.057 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.057 "listen_address": { 00:23:08.057 "trtype": "TCP", 00:23:08.057 "adrfam": "IPv4", 00:23:08.057 "traddr": "10.0.0.2", 00:23:08.057 "trsvcid": "4420" 00:23:08.057 }, 00:23:08.057 "secure_channel": false, 00:23:08.057 "sock_impl": "ssl" 00:23:08.057 } 00:23:08.057 } 00:23:08.057 ] 
00:23:08.057 } 00:23:08.057 ] 00:23:08.057 }' 00:23:08.057 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.057 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1136810 00:23:08.057 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1136810 00:23:08.057 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:08.057 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1136810 ']' 00:23:08.057 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.057 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:08.057 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.057 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:08.057 17:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.057 [2024-10-14 17:40:07.177012] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:23:08.057 [2024-10-14 17:40:07.177056] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.316 [2024-10-14 17:40:07.248772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.316 [2024-10-14 17:40:07.289195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.316 [2024-10-14 17:40:07.289229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.316 [2024-10-14 17:40:07.289236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.316 [2024-10-14 17:40:07.289243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.316 [2024-10-14 17:40:07.289248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
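The restart above is the save/replay half of tls.sh: the JSON captured earlier with save_config is echoed back into a fresh nvmf_tgt as '-c /dev/fd/62'. A minimal bash sketch of that pattern follows; it assumes the fd comes from process substitution, and the RPC/APP variable names are illustrative rather than the script's own.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    APP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

    # Capture the live target's full JSON config (keyring, sock, bdev,
    # nvmf subsystems), matching the tgtcfg dump in the log above.
    tgtcfg=$("$RPC" save_config)

    # Relaunch the target, feeding the saved JSON back in on a /dev/fd
    # path; '<(...)' is assumed to be what yields '/dev/fd/62' here.
    "$APP" -i 0 -e 0xFFFF -c <(printf '%s' "$tgtcfg") &

Replaying the whole JSON keeps the key0 PSK at /tmp/tmp.Sa0ojhDK0R, the ssl sock_impl listener on 10.0.0.2:4420, and the malloc0 namespace identical across the restart, so the next bdevperf run can reconnect without any reprovisioning.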
00:23:08.316 [2024-10-14 17:40:07.289862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.575 [2024-10-14 17:40:07.502225] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.575 [2024-10-14 17:40:07.534259] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:08.575 [2024-10-14 17:40:07.534467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.143 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:09.143 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:09.143 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:09.143 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:09.143 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.143 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.143 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1137052 00:23:09.143 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1137052 /var/tmp/bdevperf.sock 00:23:09.143 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1137052 ']' 00:23:09.143 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.143 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:09.143 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:09.143 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
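The bdevperf launch just above follows the same fd-fed pattern on the initiator side: started idle with -z on /var/tmp/bdevperf.sock, handed its own JSON (the key0 keyring entry plus a bdev_nvme_attach_controller call carrying psk key0) as '-c /dev/fd/63', and only told to run afterwards. A hedged sketch of that driver sequence, using the binaries the log itself invokes; the $bperfcfg variable and the process substitution are assumptions standing in for the script's plumbing:

    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    BPERF_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
    SOCK=/var/tmp/bdevperf.sock

    # -z keeps bdevperf idle until RPCs arrive on $SOCK, so the TLS key
    # and attach parameters are in place before any I/O is attempted.
    "$BDEVPERF" -m 2 -z -r "$SOCK" -q 128 -o 4k -w verify -t 1 \
        -c <(printf '%s' "$bperfcfg") &

    # Kick off the 1-second verify workload that produces the
    # IOPS/latency JSON recorded further down in the log.
    "$BPERF_PY" -s "$SOCK" perform_tests

Starting idle is presumably also why the script checks bdev_nvme_get_controllers for nvme0 before calling perform_tests, as seen below: the controller attach (and its TLS handshake against port 4420) has to complete before the workload starts.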
00:23:09.143 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:09.143 "subsystems": [ 00:23:09.143 { 00:23:09.143 "subsystem": "keyring", 00:23:09.143 "config": [ 00:23:09.143 { 00:23:09.143 "method": "keyring_file_add_key", 00:23:09.143 "params": { 00:23:09.143 "name": "key0", 00:23:09.143 "path": "/tmp/tmp.Sa0ojhDK0R" 00:23:09.143 } 00:23:09.143 } 00:23:09.143 ] 00:23:09.143 }, 00:23:09.143 { 00:23:09.143 "subsystem": "iobuf", 00:23:09.143 "config": [ 00:23:09.143 { 00:23:09.143 "method": "iobuf_set_options", 00:23:09.143 "params": { 00:23:09.143 "small_pool_count": 8192, 00:23:09.143 "large_pool_count": 1024, 00:23:09.143 "small_bufsize": 8192, 00:23:09.143 "large_bufsize": 135168 00:23:09.143 } 00:23:09.143 } 00:23:09.143 ] 00:23:09.143 }, 00:23:09.143 { 00:23:09.143 "subsystem": "sock", 00:23:09.143 "config": [ 00:23:09.143 { 00:23:09.143 "method": "sock_set_default_impl", 00:23:09.143 "params": { 00:23:09.143 "impl_name": "posix" 00:23:09.143 } 00:23:09.143 }, 00:23:09.143 { 00:23:09.143 "method": "sock_impl_set_options", 00:23:09.143 "params": { 00:23:09.143 "impl_name": "ssl", 00:23:09.143 "recv_buf_size": 4096, 00:23:09.143 "send_buf_size": 4096, 00:23:09.143 "enable_recv_pipe": true, 00:23:09.143 "enable_quickack": false, 00:23:09.143 "enable_placement_id": 0, 00:23:09.143 "enable_zerocopy_send_server": true, 00:23:09.143 "enable_zerocopy_send_client": false, 00:23:09.143 "zerocopy_threshold": 0, 00:23:09.143 "tls_version": 0, 00:23:09.143 "enable_ktls": false 00:23:09.143 } 00:23:09.143 }, 00:23:09.143 { 00:23:09.143 "method": "sock_impl_set_options", 00:23:09.143 "params": { 00:23:09.143 "impl_name": "posix", 00:23:09.143 "recv_buf_size": 2097152, 00:23:09.143 "send_buf_size": 2097152, 00:23:09.143 "enable_recv_pipe": true, 00:23:09.143 "enable_quickack": false, 00:23:09.143 "enable_placement_id": 0, 00:23:09.143 "enable_zerocopy_send_server": true, 00:23:09.143 "enable_zerocopy_send_client": false, 00:23:09.143 "zerocopy_threshold": 0, 00:23:09.143 "tls_version": 0, 00:23:09.143 "enable_ktls": false 00:23:09.143 } 00:23:09.143 } 00:23:09.143 ] 00:23:09.143 }, 00:23:09.143 { 00:23:09.143 "subsystem": "vmd", 00:23:09.143 "config": [] 00:23:09.143 }, 00:23:09.143 { 00:23:09.143 "subsystem": "accel", 00:23:09.143 "config": [ 00:23:09.143 { 00:23:09.143 "method": "accel_set_options", 00:23:09.143 "params": { 00:23:09.143 "small_cache_size": 128, 00:23:09.143 "large_cache_size": 16, 00:23:09.143 "task_count": 2048, 00:23:09.143 "sequence_count": 2048, 00:23:09.143 "buf_count": 2048 00:23:09.143 } 00:23:09.143 } 00:23:09.143 ] 00:23:09.143 }, 00:23:09.143 { 00:23:09.143 "subsystem": "bdev", 00:23:09.143 "config": [ 00:23:09.143 { 00:23:09.143 "method": "bdev_set_options", 00:23:09.143 "params": { 00:23:09.143 "bdev_io_pool_size": 65535, 00:23:09.143 "bdev_io_cache_size": 256, 00:23:09.143 "bdev_auto_examine": true, 00:23:09.143 "iobuf_small_cache_size": 128, 00:23:09.143 "iobuf_large_cache_size": 16 00:23:09.143 } 00:23:09.143 }, 00:23:09.143 { 00:23:09.143 "method": "bdev_raid_set_options", 00:23:09.143 "params": { 00:23:09.143 "process_window_size_kb": 1024, 00:23:09.143 "process_max_bandwidth_mb_sec": 0 00:23:09.143 } 00:23:09.143 }, 00:23:09.143 { 00:23:09.143 "method": "bdev_iscsi_set_options", 00:23:09.143 "params": { 00:23:09.143 "timeout_sec": 30 00:23:09.143 } 00:23:09.143 }, 00:23:09.143 { 00:23:09.143 "method": "bdev_nvme_set_options", 00:23:09.143 "params": { 00:23:09.143 "action_on_timeout": "none", 00:23:09.143 "timeout_us": 0, 
00:23:09.143 "timeout_admin_us": 0, 00:23:09.143 "keep_alive_timeout_ms": 10000, 00:23:09.143 "arbitration_burst": 0, 00:23:09.143 "low_priority_weight": 0, 00:23:09.143 "medium_priority_weight": 0, 00:23:09.143 "high_priority_weight": 0, 00:23:09.143 "nvme_adminq_poll_period_us": 10000, 00:23:09.143 "nvme_ioq_poll_period_us": 0, 00:23:09.143 "io_queue_requests": 512, 00:23:09.143 "delay_cmd_submit": true, 00:23:09.143 "transport_retry_count": 4, 00:23:09.143 "bdev_retry_count": 3, 00:23:09.143 "transport_ack_timeout": 0, 00:23:09.143 "ctrlr_loss_timeout_sec": 0, 00:23:09.143 "reconnect_delay_sec": 0, 00:23:09.143 "fast_io_fail_timeout_sec": 0, 00:23:09.143 "disable_auto_failback": false, 00:23:09.143 "generate_uuids": false, 00:23:09.143 "transport_tos": 0, 00:23:09.143 "nvme_error_stat": false, 00:23:09.143 "rdma_srq_size": 0, 00:23:09.143 "io_path_stat": false, 00:23:09.143 "allow_accel_sequence": false, 00:23:09.143 "rdma_max_cq_size": 0, 00:23:09.143 "rdma_cm_event_timeout_ms": 0, 00:23:09.143 "dhchap_digests": [ 00:23:09.143 "sha256", 00:23:09.143 "sha384", 00:23:09.143 "sha512" 00:23:09.143 ], 00:23:09.143 "dhchap_dhgroups": [ 00:23:09.143 "null", 00:23:09.143 "ffdhe2048", 00:23:09.143 "ffdhe3072", 00:23:09.143 "ffdhe4096", 00:23:09.143 "ffdhe6144", 00:23:09.143 "ffdhe8192" 00:23:09.143 ] 00:23:09.143 } 00:23:09.143 }, 00:23:09.143 { 00:23:09.143 "method": "bdev_nvme_attach_controller", 00:23:09.143 "params": { 00:23:09.143 "name": "nvme0", 00:23:09.143 "trtype": "TCP", 00:23:09.144 "adrfam": "IPv4", 00:23:09.144 "traddr": "10.0.0.2", 00:23:09.144 "trsvcid": "4420", 00:23:09.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.144 "prchk_reftag": false, 00:23:09.144 "prchk_guard": false, 00:23:09.144 "ctrlr_loss_timeout_sec": 0, 00:23:09.144 "reconnect_delay_sec": 0, 00:23:09.144 "fast_io_fail_timeout_sec": 0, 00:23:09.144 "psk": "key0", 00:23:09.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.144 "hdgst": false, 00:23:09.144 "ddgst": false, 00:23:09.144 "multipath": "multipath" 00:23:09.144 } 00:23:09.144 }, 00:23:09.144 { 00:23:09.144 "method": "bdev_nvme_set_hotplug", 00:23:09.144 "params": { 00:23:09.144 "period_us": 100000, 00:23:09.144 "enable": false 00:23:09.144 } 00:23:09.144 }, 00:23:09.144 { 00:23:09.144 "method": "bdev_enable_histogram", 00:23:09.144 "params": { 00:23:09.144 "name": "nvme0n1", 00:23:09.144 "enable": true 00:23:09.144 } 00:23:09.144 }, 00:23:09.144 { 00:23:09.144 "method": "bdev_wait_for_examine" 00:23:09.144 } 00:23:09.144 ] 00:23:09.144 }, 00:23:09.144 { 00:23:09.144 "subsystem": "nbd", 00:23:09.144 "config": [] 00:23:09.144 } 00:23:09.144 ] 00:23:09.144 }' 00:23:09.144 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:09.144 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.144 [2024-10-14 17:40:08.089514] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:23:09.144 [2024-10-14 17:40:08.089561] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137052 ] 00:23:09.144 [2024-10-14 17:40:08.157046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.144 [2024-10-14 17:40:08.197380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.402 [2024-10-14 17:40:08.350041] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:09.969 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:09.969 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:09.969 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:09.969 17:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:10.228 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.228 17:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:10.228 Running I/O for 1 seconds... 00:23:11.164 5328.00 IOPS, 20.81 MiB/s 00:23:11.164 Latency(us) 00:23:11.164 [2024-10-14T15:40:10.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.164 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:11.164 Verification LBA range: start 0x0 length 0x2000 00:23:11.164 nvme0n1 : 1.01 5380.88 21.02 0.00 0.00 23630.29 4993.22 38947.11 00:23:11.164 [2024-10-14T15:40:10.302Z] =================================================================================================================== 00:23:11.164 [2024-10-14T15:40:10.302Z] Total : 5380.88 21.02 0.00 0.00 23630.29 4993.22 38947.11 00:23:11.164 { 00:23:11.164 "results": [ 00:23:11.164 { 00:23:11.164 "job": "nvme0n1", 00:23:11.164 "core_mask": "0x2", 00:23:11.164 "workload": "verify", 00:23:11.164 "status": "finished", 00:23:11.164 "verify_range": { 00:23:11.164 "start": 0, 00:23:11.164 "length": 8192 00:23:11.164 }, 00:23:11.164 "queue_depth": 128, 00:23:11.164 "io_size": 4096, 00:23:11.164 "runtime": 1.013961, 00:23:11.164 "iops": 5380.877568269391, 00:23:11.164 "mibps": 21.01905300105231, 00:23:11.164 "io_failed": 0, 00:23:11.164 "io_timeout": 0, 00:23:11.164 "avg_latency_us": 23630.29360145231, 00:23:11.164 "min_latency_us": 4993.219047619048, 00:23:11.164 "max_latency_us": 38947.10857142857 00:23:11.164 } 00:23:11.164 ], 00:23:11.164 "core_count": 1 00:23:11.164 } 00:23:11.164 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:11.164 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:11.164 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:11.164 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:23:11.164 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:23:11.164 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = 
--pid ']' 00:23:11.164 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:11.164 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:11.164 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:23:11.164 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:11.164 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:11.164 nvmf_trace.0 00:23:11.423 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:23:11.423 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1137052 00:23:11.423 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1137052 ']' 00:23:11.423 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1137052 00:23:11.423 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:11.423 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:11.423 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1137052 00:23:11.423 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:11.423 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:11.423 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1137052' 00:23:11.423 killing process with pid 1137052 00:23:11.423 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1137052 00:23:11.423 Received shutdown signal, test time was about 1.000000 seconds 00:23:11.423 00:23:11.423 Latency(us) 00:23:11.423 [2024-10-14T15:40:10.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.423 [2024-10-14T15:40:10.561Z] =================================================================================================================== 00:23:11.423 [2024-10-14T15:40:10.561Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:11.423 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1137052 00:23:11.423 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:11.424 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:11.424 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:11.424 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:11.424 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:11.424 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:11.424 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:11.424 rmmod nvme_tcp 00:23:11.424 rmmod nvme_fabrics 00:23:11.682 rmmod nvme_keyring 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:11.682 17:40:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 1136810 ']' 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 1136810 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1136810 ']' 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1136810 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1136810 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1136810' 00:23:11.682 killing process with pid 1136810 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1136810 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1136810 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:23:11.682 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:11.683 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:23:11.942 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:11.942 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:11.942 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.942 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.942 17:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.850 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:13.850 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.zCroqZxuk1 /tmp/tmp.NpqqPHjz6y /tmp/tmp.Sa0ojhDK0R 00:23:13.850 00:23:13.850 real 1m18.972s 00:23:13.850 user 1m59.648s 00:23:13.850 sys 0m31.268s 00:23:13.850 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:13.850 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.850 ************************************ 00:23:13.850 END TEST nvmf_tls 
00:23:13.850 ************************************ 00:23:13.850 17:40:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:13.850 17:40:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:13.850 17:40:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:13.850 17:40:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:13.850 ************************************ 00:23:13.850 START TEST nvmf_fips 00:23:13.850 ************************************ 00:23:13.850 17:40:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:14.110 * Looking for test storage... 00:23:14.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:14.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.110 --rc genhtml_branch_coverage=1 00:23:14.110 --rc genhtml_function_coverage=1 00:23:14.110 --rc genhtml_legend=1 00:23:14.110 --rc geninfo_all_blocks=1 00:23:14.110 --rc geninfo_unexecuted_blocks=1 00:23:14.110 00:23:14.110 ' 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:14.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.110 --rc genhtml_branch_coverage=1 00:23:14.110 --rc genhtml_function_coverage=1 00:23:14.110 --rc genhtml_legend=1 00:23:14.110 --rc geninfo_all_blocks=1 00:23:14.110 --rc geninfo_unexecuted_blocks=1 00:23:14.110 00:23:14.110 ' 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:14.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.110 --rc genhtml_branch_coverage=1 00:23:14.110 --rc genhtml_function_coverage=1 00:23:14.110 --rc genhtml_legend=1 00:23:14.110 --rc geninfo_all_blocks=1 00:23:14.110 --rc geninfo_unexecuted_blocks=1 00:23:14.110 00:23:14.110 ' 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:14.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.110 --rc genhtml_branch_coverage=1 00:23:14.110 --rc genhtml_function_coverage=1 00:23:14.110 --rc genhtml_legend=1 00:23:14.110 --rc geninfo_all_blocks=1 00:23:14.110 --rc geninfo_unexecuted_blocks=1 00:23:14.110 00:23:14.110 ' 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
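The long scripts/common.sh walk traced above (IFS=.-, read -ra ver1/ver2, decimal, per-field compare loop) is bash's lexicographic-by-field version comparison. Reduced to its essential shape, and assuming only the '<' and '>=' operators exercised in this run, it looks roughly like:

    cmp_versions() {                       # e.g. cmp_versions 1.15 '<' 2
        local op=$2 v ver1 ver2
        local IFS=.-                       # split fields on '.' and '-'
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            # missing fields compare as 0, so 1.15 is treated as 1.15.0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>=' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<'  ]]; return; }
        done
        [[ $op == '>=' ]]                  # equal versions satisfy >= but not <
    }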
FreeBSD ]] 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:14.110 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:14.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:14.111 17:40:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:14.111 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:23:14.371 Error setting digest 00:23:14.371 40E2F892DE7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:14.371 40E2F892DE7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:14.371 
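What just ran above is the heart of the FIPS self-check: point OPENSSL_CONF at a generated config that pins the fips provider, confirm both providers are listed, then prove a non-approved digest is refused. The "Error setting digest" lines are the expected outcome. As a standalone sketch (the build_openssl_config step is elided):

    export OPENSSL_CONF=spdk_fips.conf           # written by build_openssl_config
    openssl list -providers | grep name          # expect one *base* and one *fips* entry
    if echo -n test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 succeeded - OpenSSL is NOT running in FIPS mode" >&2
        exit 1
    fi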
17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:14.371 17:40:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.944 17:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:20.944 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:20.944 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:20.944 17:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:20.944 Found net devices under 0000:86:00.0: cvl_0_0 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:20.944 Found net devices under 0000:86:00.1: cvl_0_1 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:23:20.944 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:20.945 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:20.945 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:20.945 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:20.945 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.945 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.945 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.945 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:20.945 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.945 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.945 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:20.945 17:40:18 
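The discovery loop traced above resolves each whitelisted PCI function (here the two E810 ports, 0000:86:00.0/1) to its kernel netdev through sysfs. Stripped of the transport and vendor branches, its shape is roughly:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue      # function has no bound netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")      # keep just the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done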
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:20.945 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.945 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.945 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:20.945 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:20.945 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.945 17:40:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:20.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:23:20.945 00:23:20.945 --- 10.0.0.2 ping statistics --- 00:23:20.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.945 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
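nvmf_tcp_init, traced above, splits the two ports across a network namespace so the target (cvl_0_0 at 10.0.0.2 inside cvl_0_0_ns_spdk) and the initiator (cvl_0_1 at 10.0.0.1 in the root namespace) talk over real wire. The essential commands, lifted straight from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator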
00:23:20.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:23:20.945 00:23:20.945 --- 10.0.0.1 ping statistics --- 00:23:20.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.945 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=1140999 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 1140999 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1140999 ']' 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:20.945 17:40:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:20.945 [2024-10-14 17:40:19.356216] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
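nvmfappstart then boots the target inside that namespace and waitforlisten blocks until its RPC socket answers. A sketch of that startup handshake (the rpc_get_methods polling is an assumption about waitforlisten's internals, not visible in this trace):

    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
    until scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1     # give up if the target died during init
        sleep 0.5
    done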
00:23:20.945 [2024-10-14 17:40:19.356266] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.945 [2024-10-14 17:40:19.429258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.945 [2024-10-14 17:40:19.469440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.945 [2024-10-14 17:40:19.469479] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.945 [2024-10-14 17:40:19.469486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.945 [2024-10-14 17:40:19.469491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.945 [2024-10-14 17:40:19.469496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:20.945 [2024-10-14 17:40:19.470062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.204 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:21.204 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:21.204 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:21.204 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:21.204 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:21.204 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.204 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:21.204 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:21.204 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:21.204 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.YD5 00:23:21.204 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:21.204 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.YD5 00:23:21.204 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.YD5 00:23:21.204 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.YD5 00:23:21.204 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:21.463 [2024-10-14 17:40:20.397805] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.463 [2024-10-14 17:40:20.413807] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:21.463 [2024-10-14 17:40:20.413992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.463 malloc0 00:23:21.463 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:21.463 17:40:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1141125 00:23:21.463 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:21.463 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1141125 /var/tmp/bdevperf.sock 00:23:21.463 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1141125 ']' 00:23:21.463 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:21.463 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:21.463 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:21.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:21.463 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:21.463 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:21.463 [2024-10-14 17:40:20.542967] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:23:21.463 [2024-10-14 17:40:20.543017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1141125 ] 00:23:21.722 [2024-10-14 17:40:20.612381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.722 [2024-10-14 17:40:20.654123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.722 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:21.722 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:23:21.722 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.YD5 00:23:21.981 17:40:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:21.981 [2024-10-14 17:40:21.097785] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:22.240 TLSTESTn1 00:23:22.240 17:40:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:22.240 Running I/O for 10 seconds... 
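The TLS data path assembled above reduces to four moves: write the PSK interchange key to a 0600 file, register it with bdevperf's keyring, attach the controller with --psk, then drive the timed run over the RPC socket. Collected from the traced commands (paths shortened relative to the spdk checkout):

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"                           # PSK files are kept private (the chmod traced above)
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests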
00:23:24.555 5361.00 IOPS, 20.94 MiB/s [2024-10-14T15:40:24.630Z] 5458.00 IOPS, 21.32 MiB/s [2024-10-14T15:40:25.576Z] 5506.00 IOPS, 21.51 MiB/s [2024-10-14T15:40:26.512Z] 5512.25 IOPS, 21.53 MiB/s [2024-10-14T15:40:27.449Z] 5532.20 IOPS, 21.61 MiB/s [2024-10-14T15:40:28.386Z] 5550.67 IOPS, 21.68 MiB/s [2024-10-14T15:40:29.321Z] 5548.14 IOPS, 21.67 MiB/s [2024-10-14T15:40:30.699Z] 5569.88 IOPS, 21.76 MiB/s [2024-10-14T15:40:31.635Z] 5563.56 IOPS, 21.73 MiB/s [2024-10-14T15:40:31.635Z] 5546.80 IOPS, 21.67 MiB/s 00:23:32.497 Latency(us) 00:23:32.497 [2024-10-14T15:40:31.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.497 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:32.497 Verification LBA range: start 0x0 length 0x2000 00:23:32.497 TLSTESTn1 : 10.02 5550.37 21.68 0.00 0.00 23027.82 4868.39 47185.92 00:23:32.497 [2024-10-14T15:40:31.635Z] =================================================================================================================== 00:23:32.497 [2024-10-14T15:40:31.635Z] Total : 5550.37 21.68 0.00 0.00 23027.82 4868.39 47185.92 00:23:32.497 { 00:23:32.497 "results": [ 00:23:32.497 { 00:23:32.497 "job": "TLSTESTn1", 00:23:32.497 "core_mask": "0x4", 00:23:32.497 "workload": "verify", 00:23:32.497 "status": "finished", 00:23:32.497 "verify_range": { 00:23:32.497 "start": 0, 00:23:32.497 "length": 8192 00:23:32.497 }, 00:23:32.497 "queue_depth": 128, 00:23:32.497 "io_size": 4096, 00:23:32.497 "runtime": 10.016443, 00:23:32.498 "iops": 5550.373520819716, 00:23:32.498 "mibps": 21.681146565702015, 00:23:32.498 "io_failed": 0, 00:23:32.498 "io_timeout": 0, 00:23:32.498 "avg_latency_us": 23027.822267949756, 00:23:32.498 "min_latency_us": 4868.388571428572, 00:23:32.498 "max_latency_us": 47185.92 00:23:32.498 } 00:23:32.498 ], 00:23:32.498 "core_count": 1 00:23:32.498 } 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:32.498 nvmf_trace.0 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1141125 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1141125 ']' 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@954 -- # kill -0 1141125 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1141125 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1141125' 00:23:32.498 killing process with pid 1141125 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1141125 00:23:32.498 Received shutdown signal, test time was about 10.000000 seconds 00:23:32.498 00:23:32.498 Latency(us) 00:23:32.498 [2024-10-14T15:40:31.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.498 [2024-10-14T15:40:31.636Z] =================================================================================================================== 00:23:32.498 [2024-10-14T15:40:31.636Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1141125 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:32.498 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:23:32.757 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:32.757 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:23:32.757 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:32.757 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:32.757 rmmod nvme_tcp 00:23:32.757 rmmod nvme_fabrics 00:23:32.757 rmmod nvme_keyring 00:23:32.757 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:32.757 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:23:32.757 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:23:32.757 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 1140999 ']' 00:23:32.757 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 1140999 00:23:32.757 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1140999 ']' 00:23:32.757 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1140999 00:23:32.757 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:23:32.757 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:32.757 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1140999 00:23:32.757 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:32.757 17:40:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:32.757 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1140999' 00:23:32.757 killing process with pid 1140999 00:23:32.757 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1140999 00:23:32.757 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1140999 00:23:33.017 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:33.017 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:33.017 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:33.017 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:23:33.017 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:23:33.017 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:33.017 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:23:33.017 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:33.017 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:33.017 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.017 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.017 17:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.922 17:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:34.922 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.YD5 00:23:34.922 00:23:34.922 real 0m21.044s 00:23:34.922 user 0m22.123s 00:23:34.922 sys 0m9.500s 00:23:34.922 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:34.922 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:34.922 ************************************ 00:23:34.922 END TEST nvmf_fips 00:23:34.922 ************************************ 00:23:34.923 17:40:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:34.923 17:40:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:34.923 17:40:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:34.923 17:40:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:35.182 ************************************ 00:23:35.182 START TEST nvmf_control_msg_list 00:23:35.182 ************************************ 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:35.182 * Looking for test storage... 
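Each suite is entered through run_test, which produces the START/END banners and the real/user/sys accounting seen throughout this log. Its shape, inferred from those banners (an illustrative reconstruction, not the verbatim helper):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                    # e.g. target/control_msg_list.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }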
00:23:35.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:35.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.182 --rc genhtml_branch_coverage=1 00:23:35.182 --rc genhtml_function_coverage=1 00:23:35.182 --rc genhtml_legend=1 00:23:35.182 --rc geninfo_all_blocks=1 00:23:35.182 --rc geninfo_unexecuted_blocks=1 00:23:35.182 00:23:35.182 ' 00:23:35.182 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:35.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.182 --rc genhtml_branch_coverage=1 00:23:35.182 --rc genhtml_function_coverage=1 00:23:35.182 --rc genhtml_legend=1 00:23:35.182 --rc geninfo_all_blocks=1 00:23:35.182 --rc geninfo_unexecuted_blocks=1 00:23:35.183 00:23:35.183 ' 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:35.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.183 --rc genhtml_branch_coverage=1 00:23:35.183 --rc genhtml_function_coverage=1 00:23:35.183 --rc genhtml_legend=1 00:23:35.183 --rc geninfo_all_blocks=1 00:23:35.183 --rc geninfo_unexecuted_blocks=1 00:23:35.183 00:23:35.183 ' 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:35.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.183 --rc genhtml_branch_coverage=1 00:23:35.183 --rc genhtml_function_coverage=1 00:23:35.183 --rc genhtml_legend=1 00:23:35.183 --rc geninfo_all_blocks=1 00:23:35.183 --rc geninfo_unexecuted_blocks=1 00:23:35.183 00:23:35.183 ' 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
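The lt 1.15 2 call above resolves to cmp_versions from scripts/common.sh: both strings are split on IFS=.-: and compared component by component, so lcov 1.15 sorts below 2 and the branch/function coverage flags get turned on. A minimal standalone sketch of the same comparison (ver_lt is a hypothetical name; numeric components only):

ver_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1    # equal is not less-than
}

ver_lt 1.15 2 && echo "old lcov: enable --rc lcov_branch_coverage=1 etc."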
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:35.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:23:35.183 17:40:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:23:41.755 17:40:39 
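The "[: : integer expression expected" complaint above is benign: common.sh line 33 runs a numeric test against a variable that is empty in this configuration, [ prints the warning and returns nonzero, and the run simply continues. A defensive form that keeps the same logic without the warning (sketch; SOME_FLAG stands in for whichever variable line 33 actually tests):

if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # default empty/unset to 0 so [ never sees ''
    :   # whatever line 33 guards
fi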
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:41.755 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.755 17:40:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:41.755 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.755 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:41.756 Found net devices under 0000:86:00.0: cvl_0_0 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:41.756 Found net devices under 0000:86:00.1: cvl_0_1 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.756 17:40:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.756 17:40:40 
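Everything from gather_supported_nvmf_pci_devs down to here is topology setup: the harness matches the node's NICs against a table of Intel/Mellanox device IDs, maps each PCI function to its netdev through /sys/bus/pci/devices/$pci/net/* (hence the two "Found net devices under ..." lines), then splits the back-to-back E810 pair across network namespaces so one port can serve as target and the other as initiator. Condensed to its net effect, using the names discovered on this node:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

The two pings that follow (10.0.0.2 from the root namespace, 10.0.0.1 from inside it) verify the path in both directions before any NVMe/TCP traffic is attempted.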
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:41.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:23:41.756 00:23:41.756 --- 10.0.0.2 ping statistics --- 00:23:41.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.756 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:41.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:23:41.756 00:23:41.756 --- 10.0.0.1 ping statistics --- 00:23:41.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.756 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=1146475 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 1146475 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 1146475 ']' 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:41.756 [2024-10-14 17:40:40.275020] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:23:41.756 [2024-10-14 17:40:40.275063] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.756 [2024-10-14 17:40:40.344741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.756 [2024-10-14 17:40:40.385869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.756 [2024-10-14 17:40:40.385902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.756 [2024-10-14 17:40:40.385909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.756 [2024-10-14 17:40:40.385915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.756 [2024-10-14 17:40:40.385921] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
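waitforlisten above blocks until the just-launched nvmf_tgt (pid 1146475, started under ip netns exec with -e 0xFFFF, which explains the tracepoint-mask notices) is actually serving RPC on /var/tmp/spdk.sock. A sketch of the idea (waitfor_rpc is a hypothetical name; the in-tree helper additionally confirms readiness with an actual RPC call):

waitfor_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [[ -S $sock ]] && return 0               # RPC socket is up
        sleep 0.1
    done
    return 1
}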
00:23:41.756 [2024-10-14 17:40:40.386471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:41.756 [2024-10-14 17:40:40.526865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.756 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:41.757 Malloc0 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.757 17:40:40 
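With the target up, the test provisions it entirely over JSON-RPC. The knob that gives the suite its name is --control-msg-num 1: the TCP transport is created with a single control message buffer, the resource the parallel initiators will contend for. The rpc_cmd sequence (transport, subsystem, bdev, namespace, then the listener add just below) is equivalent to driving scripts/rpc.py by hand against the default /var/tmp/spdk.sock:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
$RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a    # -a: allow any host
$RPC bdev_malloc_create -b Malloc0 32 512                   # 32 MiB bdev, 512 B blocks
$RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420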
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:41.757 [2024-10-14 17:40:40.567256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1146514 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1146516 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1146517 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:41.757 17:40:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1146514 00:23:41.757 [2024-10-14 17:40:40.655930] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:41.757 [2024-10-14 17:40:40.656126] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:41.757 [2024-10-14 17:40:40.656284] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:42.694 Initializing NVMe Controllers 00:23:42.694 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:42.694 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:42.694 Initialization complete. Launching workers. 
00:23:42.694 ======================================================== 00:23:42.694 Latency(us) 00:23:42.694 Device Information : IOPS MiB/s Average min max 00:23:42.694 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40903.15 40818.59 40964.92 00:23:42.694 ======================================================== 00:23:42.694 Total : 25.00 0.10 40903.15 40818.59 40964.92 00:23:42.694 00:23:42.694 Initializing NVMe Controllers 00:23:42.694 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:42.694 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:42.694 Initialization complete. Launching workers. 00:23:42.694 ======================================================== 00:23:42.694 Latency(us) 00:23:42.694 Device Information : IOPS MiB/s Average min max 00:23:42.694 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40894.67 40801.83 41012.51 00:23:42.694 ======================================================== 00:23:42.694 Total : 25.00 0.10 40894.67 40801.83 41012.51 00:23:42.694 00:23:42.694 Initializing NVMe Controllers 00:23:42.694 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:42.694 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:42.694 Initialization complete. Launching workers. 00:23:42.694 ======================================================== 00:23:42.694 Latency(us) 00:23:42.694 Device Information : IOPS MiB/s Average min max 00:23:42.694 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40937.51 40755.78 41935.85 00:23:42.694 ======================================================== 00:23:42.694 Total : 25.00 0.10 40937.51 40755.78 41935.85 00:23:42.694 00:23:42.694 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1146516 00:23:42.694 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1146517 00:23:42.694 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:42.694 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:42.694 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:42.694 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:42.954 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:42.954 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:42.954 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:42.954 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:42.954 rmmod nvme_tcp 00:23:42.954 rmmod nvme_fabrics 00:23:42.954 rmmod nvme_keyring 00:23:42.954 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:42.954 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:42.954 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:42.954 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
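The three result tables agree with the queue-depth-1 identity IOPS ≈ 1 / average latency: 1 / 40.9 ms ≈ 24.4 per second, which perf reports as the 25 IOPS shown on each core. Millisecond-scale averages on a loopback link are consistent with the three initiators serializing behind the single control-message buffer configured above; the pass criterion here is that all three perf pids complete (the wait calls), not any throughput bar.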
nvmf/common.sh@515 -- # '[' -n 1146475 ']' 00:23:42.954 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 1146475 00:23:42.954 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 1146475 ']' 00:23:42.954 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 1146475 00:23:42.954 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:23:42.954 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:42.954 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1146475 00:23:42.954 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:42.954 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:42.954 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1146475' 00:23:42.954 killing process with pid 1146475 00:23:42.954 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 1146475 00:23:42.954 17:40:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 1146475 00:23:43.214 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:43.214 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:43.214 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:43.214 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:43.214 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:23:43.214 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:43.214 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:23:43.214 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:43.214 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:43.214 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.214 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.214 17:40:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.118 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:45.118 00:23:45.118 real 0m10.112s 00:23:45.118 user 0m6.831s 00:23:45.118 sys 0m5.317s 00:23:45.118 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:45.118 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:45.118 ************************************ 00:23:45.118 END TEST nvmf_control_msg_list 00:23:45.118 
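Teardown mirrors setup, and the iptr helper above shows the idiom that makes it robust: the accept rule was originally installed with an -m comment tag, so cleanup can strip every rule the test added in one pass without tracking rule numbers:

# setup (seen earlier at common.sh@788): tag the rule with a searchable comment
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# teardown (iptr): rewrite the ruleset minus anything tagged SPDK_NVMF
iptables-save | grep -v SPDK_NVMF | iptables-restore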
************************************ 00:23:45.118 17:40:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:45.118 17:40:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:45.118 17:40:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:45.118 17:40:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:45.382 ************************************ 00:23:45.382 START TEST nvmf_wait_for_buf 00:23:45.382 ************************************ 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:45.382 * Looking for test storage... 00:23:45.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:45.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.382 --rc genhtml_branch_coverage=1 00:23:45.382 --rc genhtml_function_coverage=1 00:23:45.382 --rc genhtml_legend=1 00:23:45.382 --rc geninfo_all_blocks=1 00:23:45.382 --rc geninfo_unexecuted_blocks=1 00:23:45.382 00:23:45.382 ' 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:45.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.382 --rc genhtml_branch_coverage=1 00:23:45.382 --rc genhtml_function_coverage=1 00:23:45.382 --rc genhtml_legend=1 00:23:45.382 --rc geninfo_all_blocks=1 00:23:45.382 --rc geninfo_unexecuted_blocks=1 00:23:45.382 00:23:45.382 ' 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:45.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.382 --rc genhtml_branch_coverage=1 00:23:45.382 --rc genhtml_function_coverage=1 00:23:45.382 --rc genhtml_legend=1 00:23:45.382 --rc geninfo_all_blocks=1 00:23:45.382 --rc geninfo_unexecuted_blocks=1 00:23:45.382 00:23:45.382 ' 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:45.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.382 --rc genhtml_branch_coverage=1 00:23:45.382 --rc genhtml_function_coverage=1 00:23:45.382 --rc genhtml_legend=1 00:23:45.382 --rc geninfo_all_blocks=1 00:23:45.382 --rc geninfo_unexecuted_blocks=1 00:23:45.382 00:23:45.382 ' 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:45.382 17:40:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:45.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:45.382 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:45.383 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:45.383 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:45.383 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:23:45.383 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:23:45.383 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.383 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:45.383 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:45.383 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:45.383 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.383 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.383 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.383 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:45.383 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:45.383 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:45.383 17:40:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.076 
17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:52.076 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:52.076 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:52.076 Found net devices under 0000:86:00.0: cvl_0_0 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:52.076 Found net devices under 0000:86:00.1: cvl_0_1 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:52.076 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.077 17:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:52.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:23:52.077 00:23:52.077 --- 10.0.0.2 ping statistics --- 00:23:52.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.077 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:52.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:23:52.077 00:23:52.077 --- 10.0.0.1 ping statistics --- 00:23:52.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.077 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=1150261 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 1150261 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 1150261 ']' 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.077 [2024-10-14 17:40:50.507928] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
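The trace above assembles the TCP loopback test bed before the target banner prints: one E810 netdev (cvl_0_0) is moved into a fresh network namespace (cvl_0_0_ns_spdk), 10.0.0.1/24 goes on the host-side interface (cvl_0_1) and 10.0.0.2/24 on the namespaced one, an iptables rule opens TCP port 4420, pings in both directions confirm reachability, and nvmf_tgt is then launched inside the namespace so initiator and target share one machine without sharing a network stack. A minimal sketch of the same recipe, assuming a veth pair stands in for the physical ports so it runs anywhere; all names and addresses here are illustrative, not the test's actual values:

    #!/usr/bin/env bash
    set -e
    ip netns add spdk_tgt_ns                       # target gets its own net stack
    ip link add veth_host type veth peer name veth_tgt
    ip link set veth_tgt netns spdk_tgt_ns         # one end into the namespace
    ip addr add 10.0.0.1/24 dev veth_host          # initiator (host) side
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_host up
    ip netns exec spdk_tgt_ns ip link set veth_tgt up
    ip netns exec spdk_tgt_ns ip link set lo up
    iptables -I INPUT 1 -i veth_host -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                             # host -> namespace
    ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1   # namespace -> host
    # the target app would then run inside the namespace, e.g.:
    # ip netns exec spdk_tgt_ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc

With real hardware the log moves the physical port itself into the namespace; the veth pair above is only a stand-in for machines without the E810 NICs.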
00:23:52.077 [2024-10-14 17:40:50.507975] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.077 [2024-10-14 17:40:50.582586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.077 [2024-10-14 17:40:50.624908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.077 [2024-10-14 17:40:50.624938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.077 [2024-10-14 17:40:50.624946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.077 [2024-10-14 17:40:50.624952] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.077 [2024-10-14 17:40:50.624958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.077 [2024-10-14 17:40:50.625511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.077 17:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.077 Malloc0 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.077 [2024-10-14 17:40:50.802742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:52.077 [2024-10-14 17:40:50.826931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.077 17:40:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:52.078 [2024-10-14 17:40:50.899668] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:53.455 Initializing NVMe Controllers 00:23:53.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:53.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:23:53.455 Initialization complete. Launching workers. 00:23:53.455 ======================================================== 00:23:53.455 Latency(us) 00:23:53.455 Device Information : IOPS MiB/s Average min max 00:23:53.455 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 128.54 16.07 32208.43 7279.18 63861.57 00:23:53.455 ======================================================== 00:23:53.455 Total : 128.54 16.07 32208.43 7279.18 63861.57 00:23:53.455 00:23:53.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:23:53.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:23:53.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:53.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:23:53.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:23:53.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:53.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:23:53.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:53.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:23:53.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:53.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:23:53.455 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:53.456 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:53.456 rmmod nvme_tcp 00:23:53.456 rmmod nvme_fabrics 00:23:53.456 rmmod nvme_keyring 00:23:53.456 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:53.456 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:23:53.456 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:23:53.456 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 1150261 ']' 00:23:53.456 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 1150261 00:23:53.456 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 1150261 ']' 00:23:53.456 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 1150261 00:23:53.456 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:23:53.456 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:53.456 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1150261 00:23:53.456 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:53.456 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:53.456 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1150261' 00:23:53.456 killing process with pid 1150261 00:23:53.456 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 1150261 00:23:53.456 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 1150261 00:23:53.715 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:53.715 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:53.715 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:53.715 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:23:53.715 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:23:53.715 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:53.715 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:23:53.715 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:53.715 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:53.715 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.715 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.715 17:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.249 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:56.249 00:23:56.249 real 0m10.529s 00:23:56.249 user 0m3.949s 00:23:56.249 sys 0m5.015s 00:23:56.249 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:56.249 17:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:56.249 ************************************ 00:23:56.249 END TEST nvmf_wait_for_buf 00:23:56.249 ************************************ 00:23:56.249 17:40:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:23:56.249 17:40:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:23:56.249 17:40:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:23:56.249 17:40:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:23:56.249 17:40:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:23:56.249 17:40:54 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:01.522 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:01.522 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:01.522 Found net devices under 0000:86:00.0: cvl_0_0 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.522 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:01.523 Found net devices under 0000:86:00.1: cvl_0_1 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:01.523 ************************************ 00:24:01.523 START TEST nvmf_perf_adq 00:24:01.523 ************************************ 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:01.523 * Looking for test storage... 00:24:01.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:01.523 17:41:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:01.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.523 --rc genhtml_branch_coverage=1 00:24:01.523 --rc genhtml_function_coverage=1 00:24:01.523 --rc genhtml_legend=1 00:24:01.523 --rc geninfo_all_blocks=1 00:24:01.523 --rc geninfo_unexecuted_blocks=1 00:24:01.523 00:24:01.523 ' 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:01.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.523 --rc genhtml_branch_coverage=1 00:24:01.523 --rc genhtml_function_coverage=1 00:24:01.523 --rc genhtml_legend=1 00:24:01.523 --rc geninfo_all_blocks=1 00:24:01.523 --rc geninfo_unexecuted_blocks=1 00:24:01.523 00:24:01.523 ' 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:01.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.523 --rc genhtml_branch_coverage=1 00:24:01.523 --rc genhtml_function_coverage=1 00:24:01.523 --rc genhtml_legend=1 00:24:01.523 --rc geninfo_all_blocks=1 00:24:01.523 --rc geninfo_unexecuted_blocks=1 00:24:01.523 00:24:01.523 ' 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:01.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.523 --rc genhtml_branch_coverage=1 00:24:01.523 --rc genhtml_function_coverage=1 00:24:01.523 --rc genhtml_legend=1 00:24:01.523 --rc geninfo_all_blocks=1 00:24:01.523 --rc geninfo_unexecuted_blocks=1 00:24:01.523 00:24:01.523 ' 00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
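Both sourcings of nvmf/common.sh in this log trip the same bash complaint at line 33 ("[: : integer expression expected"), because the traced test is '[' '' -eq 1 ']': the variable under test expands to an empty string, which '[' cannot compare as an integer. It is harmless here, since the failed test simply takes the false branch, and the re-sourcing of common.sh that follows below hits it again. A hedged sketch of the usual guard, with FLAG as a placeholder because the actual variable name at common.sh line 33 is not visible in the trace:

    #!/usr/bin/env bash
    # FLAG is hypothetical; substitute whatever common.sh line 33 really tests.
    FLAG=""
    [ "$FLAG" -eq 1 ]            # reproduces: [: : integer expression expected
    [ "${FLAG:-0}" -eq 1 ]       # defaulting empty/unset to 0 keeps '[' well-formed
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi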
00:24:01.523 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:01.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:01.783 17:41:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:01.783 17:41:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:08.354 17:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:08.354 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:08.354 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:08.354 Found net devices under 0000:86:00.0: cvl_0_0 00:24:08.354 17:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:08.354 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:08.355 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.355 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:08.355 Found net devices under 0000:86:00.1: cvl_0_1 00:24:08.355 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.355 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:08.355 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.355 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:08.355 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:08.355 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:24:08.355 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:24:08.355 17:41:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:24:08.612 17:41:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:24:10.517 17:41:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.792 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:15.793 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:15.793 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:15.793 Found net devices under 0000:86:00.0: cvl_0_0 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:15.793 Found net devices under 0000:86:00.1: cvl_0_1 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:15.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:15.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms
00:24:15.793 
00:24:15.793 --- 10.0.0.2 ping statistics ---
00:24:15.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:15.793 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms
00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:15.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:15.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms
00:24:15.793 
00:24:15.793 --- 10.0.0.1 ping statistics ---
00:24:15.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:15.793 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms
00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0
00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:24:15.793 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:24:16.056 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:24:16.056 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:24:16.056 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:16.056 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:24:16.056 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1158603
00:24:16.056 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1158603
00:24:16.056 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:24:16.056 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1158603 ']'
00:24:16.056 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:16.056 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:16.056 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:16.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:16.056 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:16.056 17:41:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:24:16.056 [2024-10-14 17:41:15.003147] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization...
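Before the EAL parameter dump continues below, it is worth unpacking what nvmftestinit just did. Everything from ip netns add through the two pings is nvmf_tcp_init building the physical loopback this job runs on: the two ports of the dual-port E810 (0000:86:00.0/.1, ice driver, exposed as cvl_0_0 and cvl_0_1) end up on opposite sides of a network namespace, so NVMe/TCP traffic between initiator and target still crosses real NIC hardware on a single host. Distilled to a standalone sketch, using this run's interface names and addresses (substitute your own; the namespace name is just the convention this harness uses):

NS=cvl_0_0_ns_spdk
ip netns add $NS
ip link set cvl_0_0 netns $NS                # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator port stays in the root namespace
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                           # verify both directions, as the log does
ip netns exec $NS ping -c 1 10.0.0.1

With the namespace up, nvmfappstart launches nvmf_tgt under ip netns exec with --wait-for-rpc, which holds the framework in a pre-init state so the socket implementation can still be tuned over /var/tmp/spdk.sock before any subsystems are created.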
00:24:16.056 [2024-10-14 17:41:15.003190] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.056 [2024-10-14 17:41:15.076256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:16.056 [2024-10-14 17:41:15.119664] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.056 [2024-10-14 17:41:15.119699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.056 [2024-10-14 17:41:15.119706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.056 [2024-10-14 17:41:15.119712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.056 [2024-10-14 17:41:15.119717] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.056 [2024-10-14 17:41:15.121316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.056 [2024-10-14 17:41:15.121425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.056 [2024-10-14 17:41:15.121530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.056 [2024-10-14 17:41:15.121531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:16.056 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:16.056 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:24:16.056 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:16.056 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:16.056 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:16.056 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.056 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.317 
17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:16.317 [2024-10-14 17:41:15.329922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:16.317 Malloc1 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:16.317 [2024-10-14 17:41:15.391786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1158841 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:24:16.317 17:41:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:18.853 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:24:18.853 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.853 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:18.853 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.853 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:24:18.853 "tick_rate": 2100000000, 00:24:18.853 "poll_groups": [ 00:24:18.853 { 00:24:18.853 "name": "nvmf_tgt_poll_group_000", 00:24:18.853 "admin_qpairs": 1, 00:24:18.853 "io_qpairs": 1, 00:24:18.853 "current_admin_qpairs": 1, 00:24:18.853 "current_io_qpairs": 1, 00:24:18.853 "pending_bdev_io": 0, 00:24:18.853 "completed_nvme_io": 19404, 00:24:18.853 "transports": [ 00:24:18.853 { 00:24:18.853 "trtype": "TCP" 00:24:18.853 } 00:24:18.853 ] 00:24:18.853 }, 00:24:18.853 { 00:24:18.853 "name": "nvmf_tgt_poll_group_001", 00:24:18.853 "admin_qpairs": 0, 00:24:18.853 "io_qpairs": 1, 00:24:18.853 "current_admin_qpairs": 0, 00:24:18.853 "current_io_qpairs": 1, 00:24:18.853 "pending_bdev_io": 0, 00:24:18.853 "completed_nvme_io": 19406, 00:24:18.853 "transports": [ 00:24:18.853 { 00:24:18.853 "trtype": "TCP" 00:24:18.853 } 00:24:18.853 ] 00:24:18.853 }, 00:24:18.853 { 00:24:18.853 "name": "nvmf_tgt_poll_group_002", 00:24:18.853 "admin_qpairs": 0, 00:24:18.853 "io_qpairs": 1, 00:24:18.853 "current_admin_qpairs": 0, 00:24:18.853 "current_io_qpairs": 1, 00:24:18.853 "pending_bdev_io": 0, 00:24:18.853 "completed_nvme_io": 19403, 00:24:18.853 "transports": [ 00:24:18.853 { 00:24:18.853 "trtype": "TCP" 00:24:18.853 } 00:24:18.853 ] 00:24:18.853 }, 00:24:18.853 { 00:24:18.853 "name": "nvmf_tgt_poll_group_003", 00:24:18.853 "admin_qpairs": 0, 00:24:18.853 "io_qpairs": 1, 00:24:18.853 "current_admin_qpairs": 0, 00:24:18.853 "current_io_qpairs": 1, 00:24:18.853 "pending_bdev_io": 0, 00:24:18.853 "completed_nvme_io": 19305, 00:24:18.853 "transports": [ 00:24:18.853 { 00:24:18.853 "trtype": "TCP" 00:24:18.853 } 00:24:18.853 ] 00:24:18.853 } 00:24:18.853 ] 00:24:18.853 }' 00:24:18.853 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:18.853 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:24:18.853 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:24:18.853 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:24:18.853 17:41:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1158841 00:24:26.976 Initializing NVMe Controllers 00:24:26.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:26.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:26.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:26.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:26.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7
00:24:26.976 Initialization complete. Launching workers.
00:24:26.976 ========================================================
00:24:26.976 Latency(us)
00:24:26.976 Device Information : IOPS MiB/s Average min max
00:24:26.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10227.30 39.95 6258.98 2303.85 11770.91
00:24:26.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10370.70 40.51 6170.02 2291.76 13773.46
00:24:26.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10301.00 40.24 6215.77 2123.74 10286.41
00:24:26.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10321.60 40.32 6201.43 1940.10 10608.53
00:24:26.976 ========================================================
00:24:26.976 Total : 41220.59 161.02 6211.39 1940.10 13773.46
00:24:26.976 
00:24:26.976 [2024-10-14 17:41:25.553534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c0620 is same with the state(6) to be set
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:26.976 rmmod nvme_tcp
00:24:26.976 rmmod nvme_fabrics
00:24:26.976 rmmod nvme_keyring
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1158603 ']'
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1158603
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1158603 ']'
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1158603
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1158603
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1158603'
00:24:26.976 killing process with pid 1158603
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1158603
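This first pass is the ADQ-off baseline: sock placement id 0 and --sock-priority 0, with spdk_nvme_perf driving one queue pair from each of cores 4-7 (-c 0xF0). The stats gathered above confirm an even spread, one active I/O qpair on each of the four poll groups with near-identical completion counts (19305-19406), and the results table adds up to roughly 41.2k IOPS at about 6.2 ms average latency for the 4 KiB random-read workload. The @86/@87 check boils down to the following standalone assertion (a sketch: the harness issues nvmf_get_stats through its rpc_cmd wrapper inside the namespace; scripts/rpc.py from an SPDK checkout is assumed here):

count=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)
# jq prints one value per poll group that currently owns exactly one I/O
# qpair, so wc -l counts qualifying groups; four connections spread over
# four groups must yield 4.
[[ $count -ne 4 ]] && echo 'baseline: expected one active qpair per poll group' && exit 1

With the baseline recorded, the harness tears the stack down (nvmftestfini above) so the driver can be reloaded and reconfigured with ADQ enabled for the second pass.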
00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1158603 00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.976 17:41:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.881 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:28.881 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:24:28.881 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:24:28.881 17:41:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:24:30.259 17:41:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:24:32.167 17:41:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:24:37.443 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:24:37.443 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:37.443 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.443 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:37.443 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:37.444 17:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:37.444 17:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:37.444 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:37.444 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:37.444 Found net devices under 0000:86:00.0: cvl_0_0 00:24:37.444 17:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:37.444 Found net devices under 0000:86:00.1: cvl_0_1 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # 
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:37.444 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:37.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:37.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms
00:24:37.444 
00:24:37.444 --- 10.0.0.2 ping statistics ---
00:24:37.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:37.444 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms
00:24:37.445 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:37.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:37.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms
00:24:37.445 
00:24:37.445 --- 10.0.0.1 ping statistics ---
00:24:37.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:37.445 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms
00:24:37.445 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:37.445 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0
00:24:37.445 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:24:37.445 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:37.445 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:24:37.445 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:24:37.445 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:37.445 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:24:37.445 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:24:37.445 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver
00:24:37.445 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:24:37.445 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:37.445 17:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:37.445 net.core.busy_poll = 1 00:24:37.445 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:37.445 net.core.busy_read = 1 00:24:37.445 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:37.445 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:37.704 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:37.704 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:37.704 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:37.704 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:37.704 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:37.704 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:37.704 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:37.704 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1162621 00:24:37.704 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1162621 00:24:37.704 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:37.704 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1162621 ']' 00:24:37.704 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.704 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:37.704 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.704 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:37.704 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:37.704 [2024-10-14 17:41:36.817517] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
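adq_configure_driver above is where ADQ is actually switched on for the second pass. In outline: hardware tc offload is enabled on the port, busy polling is turned on globally, mqprio splits the port's queues into two traffic classes, and a hardware-only flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into the dedicated class. The same sequence as a standalone sketch (commands taken from the log; in this harness they run inside the target namespace, and the 2+2 queue split is this rig's configuration, not a general requirement):

dev=cvl_0_0
ethtool --offload $dev hw-tc-offload on          # let the ice driver offload tc classification
ethtool --set-priv-flags $dev channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                   # busy-poll sockets rather than waiting on interrupts
sysctl -w net.core.busy_read=1
# two traffic classes: TC0 = queues 0-1 (default traffic), TC1 = queues 2-3 (the ADQ set)
tc qdisc add dev $dev root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev $dev ingress
# steer NVMe/TCP (port 4420) into TC1 entirely in hardware (skip_sw)
tc filter add dev $dev protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper then aligns transmit queue selection (XPS) with the same queue set. The matching target-side half follows below: adq_configure_nvmf_target 1 sets --enable-placement-id 1 on the posix sock layer and creates the TCP transport with --sock-priority 1, letting SPDK group incoming connections by their hardware queue. The later nvmf_get_stats check therefore expects the four perf qpairs to collapse onto two poll groups, one per TC1 queue, instead of the baseline's four.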
00:24:37.704 [2024-10-14 17:41:36.817566] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.963 [2024-10-14 17:41:36.890583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:37.963 [2024-10-14 17:41:36.930574] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.963 [2024-10-14 17:41:36.930616] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.963 [2024-10-14 17:41:36.930624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.963 [2024-10-14 17:41:36.930630] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.963 [2024-10-14 17:41:36.930635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.963 [2024-10-14 17:41:36.932197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.963 [2024-10-14 17:41:36.932305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.963 [2024-10-14 17:41:36.932430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.963 [2024-10-14 17:41:36.932432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:37.963 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:37.963 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:24:37.963 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:37.963 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:37.963 17:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:37.963 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.963 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:24:37.963 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:37.963 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:37.963 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.963 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:37.963 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.963 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:37.963 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:37.963 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.963 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:37.963 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.963 
17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:37.963 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.963 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:38.222 [2024-10-14 17:41:37.142029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:38.222 Malloc1 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:38.222 [2024-10-14 17:41:37.212525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1162658 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:24:38.222 17:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:40.127 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:24:40.127 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.127 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:40.127 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.127 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:24:40.127 "tick_rate": 2100000000, 00:24:40.127 "poll_groups": [ 00:24:40.127 { 00:24:40.127 "name": "nvmf_tgt_poll_group_000", 00:24:40.127 "admin_qpairs": 1, 00:24:40.127 "io_qpairs": 2, 00:24:40.127 "current_admin_qpairs": 1, 00:24:40.127 "current_io_qpairs": 2, 00:24:40.127 "pending_bdev_io": 0, 00:24:40.127 "completed_nvme_io": 28797, 00:24:40.127 "transports": [ 00:24:40.127 { 00:24:40.127 "trtype": "TCP" 00:24:40.127 } 00:24:40.127 ] 00:24:40.127 }, 00:24:40.127 { 00:24:40.127 "name": "nvmf_tgt_poll_group_001", 00:24:40.127 "admin_qpairs": 0, 00:24:40.127 "io_qpairs": 2, 00:24:40.127 "current_admin_qpairs": 0, 00:24:40.127 "current_io_qpairs": 2, 00:24:40.127 "pending_bdev_io": 0, 00:24:40.127 "completed_nvme_io": 27964, 00:24:40.127 "transports": [ 00:24:40.127 { 00:24:40.127 "trtype": "TCP" 00:24:40.127 } 00:24:40.127 ] 00:24:40.127 }, 00:24:40.127 { 00:24:40.127 "name": "nvmf_tgt_poll_group_002", 00:24:40.127 "admin_qpairs": 0, 00:24:40.127 "io_qpairs": 0, 00:24:40.127 "current_admin_qpairs": 0, 00:24:40.127 "current_io_qpairs": 0, 00:24:40.127 "pending_bdev_io": 0, 00:24:40.127 "completed_nvme_io": 0, 00:24:40.127 "transports": [ 00:24:40.127 { 00:24:40.127 "trtype": "TCP" 00:24:40.127 } 00:24:40.127 ] 00:24:40.127 }, 00:24:40.127 { 00:24:40.127 "name": "nvmf_tgt_poll_group_003", 00:24:40.127 "admin_qpairs": 0, 00:24:40.127 "io_qpairs": 0, 00:24:40.127 "current_admin_qpairs": 0, 00:24:40.127 "current_io_qpairs": 0, 00:24:40.127 "pending_bdev_io": 0, 00:24:40.127 "completed_nvme_io": 0, 00:24:40.127 "transports": [ 00:24:40.127 { 00:24:40.127 "trtype": "TCP" 00:24:40.127 } 00:24:40.127 ] 00:24:40.127 } 00:24:40.127 ] 00:24:40.127 }' 00:24:40.127 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:40.127 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:24:40.386 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:24:40.386 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:24:40.386 17:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1162658 00:24:48.506 Initializing NVMe Controllers 00:24:48.506 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:48.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:48.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:48.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:48.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7
00:24:48.506 Initialization complete. Launching workers.
00:24:48.506 ========================================================
00:24:48.506 Latency(us)
00:24:48.506 Device Information : IOPS MiB/s Average min max
00:24:48.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7716.20 30.14 8292.96 1465.82 52850.14
00:24:48.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7661.30 29.93 8369.46 1468.18 52803.19
00:24:48.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7297.41 28.51 8770.19 1333.38 53156.73
00:24:48.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7175.01 28.03 8944.76 1161.75 52028.28
00:24:48.506 ========================================================
00:24:48.506 Total : 29849.91 116.60 8585.93 1161.75 53156.73
00:24:48.506
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:48.506 rmmod nvme_tcp
00:24:48.506 rmmod nvme_fabrics
00:24:48.506 rmmod nvme_keyring
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1162621 ']'
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1162621
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1162621 ']'
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1162621
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1162621
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1162621'
00:24:48.506 killing process with pid 1162621
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1162621
00:24:48.506 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1162621
00:24:48.765 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq --
nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:48.765 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:48.765 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:48.765 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:24:48.765 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:24:48.765 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:48.765 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:24:48.765 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:48.765 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:48.765 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.765 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.765 17:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.669 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:50.669 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:24:50.669 00:24:50.669 real 0m49.283s 00:24:50.669 user 2m43.871s 00:24:50.669 sys 0m10.329s 00:24:50.669 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:50.669 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:50.669 ************************************ 00:24:50.669 END TEST nvmf_perf_adq 00:24:50.669 ************************************ 00:24:50.669 17:41:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:50.669 17:41:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:50.669 17:41:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:50.669 17:41:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:50.930 ************************************ 00:24:50.930 START TEST nvmf_shutdown 00:24:50.930 ************************************ 00:24:50.930 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:50.930 * Looking for test storage... 
00:24:50.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:50.930 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:50.930 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:24:50.930 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:50.930 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:50.930 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:50.930 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:50.930 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:50.930 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:50.930 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:50.930 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:50.930 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:50.930 17:41:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:50.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.930 --rc genhtml_branch_coverage=1 00:24:50.930 --rc genhtml_function_coverage=1 00:24:50.930 --rc genhtml_legend=1 00:24:50.930 --rc geninfo_all_blocks=1 00:24:50.930 --rc geninfo_unexecuted_blocks=1 00:24:50.930 00:24:50.930 ' 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:50.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.930 --rc genhtml_branch_coverage=1 00:24:50.930 --rc genhtml_function_coverage=1 00:24:50.930 --rc genhtml_legend=1 00:24:50.930 --rc geninfo_all_blocks=1 00:24:50.930 --rc geninfo_unexecuted_blocks=1 00:24:50.930 00:24:50.930 ' 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:50.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.930 --rc genhtml_branch_coverage=1 00:24:50.930 --rc genhtml_function_coverage=1 00:24:50.930 --rc genhtml_legend=1 00:24:50.930 --rc geninfo_all_blocks=1 00:24:50.930 --rc geninfo_unexecuted_blocks=1 00:24:50.930 00:24:50.930 ' 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:50.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.930 --rc genhtml_branch_coverage=1 00:24:50.930 --rc genhtml_function_coverage=1 00:24:50.930 --rc genhtml_legend=1 00:24:50.930 --rc geninfo_all_blocks=1 00:24:50.930 --rc geninfo_unexecuted_blocks=1 00:24:50.930 00:24:50.930 ' 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
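The lt/cmp_versions trace above is a plain element-wise version comparison, used here to decide which lcov option spelling the coverage environment exports: both strings are split on the characters ".-:" and compared field by field, with missing fields counting as zero. A condensed sketch of the same pattern, assuming purely numeric fields (the real helper in scripts/common.sh additionally sanitizes each field through its decimal function):

# Condensed sketch, not the verbatim helper from scripts/common.sh.
lt() {   # lt A B -> exit 0 iff version A sorts strictly before version B
    local -a ver1 ver2
    local v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
    done
    return 1   # equal is not less-than
}
lt 1.15 2 && echo "1.15 sorts before 2"   # the branch taken in the trace above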
00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.930 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:50.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:50.931 17:41:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:50.931 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:51.190 ************************************ 00:24:51.190 START TEST nvmf_shutdown_tc1 00:24:51.190 ************************************ 00:24:51.190 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:24:51.190 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:24:51.190 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:51.190 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:51.190 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.190 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:51.190 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:51.190 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:51.190 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.190 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.190 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.190 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:51.190 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:51.190 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:51.190 17:41:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:57.760 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:57.760 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:57.760 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:57.760 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:57.760 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:57.760 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:57.760 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:57.760 17:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:24:57.760 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:57.760 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:24:57.760 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:24:57.760 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:24:57.760 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:24:57.760 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:24:57.760 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:57.761 17:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:57.761 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:57.761 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:57.761 Found net devices under 0000:86:00.0: cvl_0_0 00:24:57.761 17:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:57.761 Found net devices under 0000:86:00.1: cvl_0_1 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:57.761 17:41:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.761 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.761 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.761 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:57.761 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:57.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:24:57.761 00:24:57.761 --- 10.0.0.2 ping statistics --- 00:24:57.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.761 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:24:57.761 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:57.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:24:57.761 00:24:57.761 --- 10.0.0.1 ping statistics --- 00:24:57.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.761 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:24:57.761 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.761 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:24:57.761 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:57.761 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.761 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:57.761 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:57.761 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.761 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:57.761 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:57.761 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=1167881 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 1167881 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1167881 ']' 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
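Both pings returning 0% loss close out nvmftestinit's plumbing: the target port (cvl_0_0, 10.0.0.2) lives in its own network namespace while the initiator keeps cvl_0_1 (10.0.0.1) in the root namespace, so initiator and target traffic genuinely crosses the e810 wire rather than loopback. Condensed from the commands traced above, with this run's names and addresses:

# Sketch of the split topology nvmftestinit assembled above.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Admit NVMe/TCP (port 4420) through the initiator port; the SPDK_NVMF
# comment tag is what the iptables-save | grep -v SPDK_NVMF cleanup,
# seen at the end of the previous test, keys on.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator

Launching nvmf_tgt under ip netns exec cvl_0_0_ns_spdk, as the nvmfappstart line above does, is what makes 10.0.0.2:4420 reachable only through that port.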
00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:57.762 [2024-10-14 17:41:56.142801] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:24:57.762 [2024-10-14 17:41:56.142850] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.762 [2024-10-14 17:41:56.215614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:57.762 [2024-10-14 17:41:56.258107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:57.762 [2024-10-14 17:41:56.258142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.762 [2024-10-14 17:41:56.258148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.762 [2024-10-14 17:41:56.258154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.762 [2024-10-14 17:41:56.258159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:57.762 [2024-10-14 17:41:56.259651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.762 [2024-10-14 17:41:56.259757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:57.762 [2024-10-14 17:41:56.259863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.762 [2024-10-14 17:41:56.259864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:57.762 [2024-10-14 17:41:56.396586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:57.762 17:41:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.762 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:57.762 Malloc1 
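The create_subsystems loop above emits ten bare '# cat' traces with no body because xtrace does not echo heredoc contents; only the effects are visible (Malloc1 here, Malloc2 through Malloc10 just below). A hypothetical reconstruction of what each iteration likely appends to rpcs.txt, inferred from those effects and from the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 defaults set when shutdown.sh was sourced; the serial-number spelling is a guess:

# Hypothetical reconstruction; the heredoc lines are not shown in the trace.
RPCS=rpcs.txt
rm -f "$RPCS"
for i in {1..10}; do
    cat >> "$RPCS" <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
scripts/rpc.py < "$RPCS"   # the single bare rpc_cmd above replays the batch

Batching all forty calls through one rpc.py invocation avoids paying process startup and socket setup per RPC.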
00:24:57.762 [2024-10-14 17:41:56.512195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.762 Malloc2 00:24:57.762 Malloc3 00:24:57.762 Malloc4 00:24:57.762 Malloc5 00:24:57.762 Malloc6 00:24:57.762 Malloc7 00:24:57.762 Malloc8 00:24:57.762 Malloc9 00:24:57.762 Malloc10 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1168155 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1168155 /var/tmp/bdevperf.sock 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1168155 ']' 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:58.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
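Next the harness assembles the bdevperf side. gen_nvmf_target_json, traced below, accumulates one bdev_nvme_attach_controller JSON fragment per subsystem in a config array and hands the finished document to bdev_svc as --json /dev/fd/63, a process-substitution file descriptor. A condensed sketch; the outer wrapper shape is an assumption, since the trace only shows the per-controller fragments, and the concrete values mirror this run:

# Condensed sketch of the gen_nvmf_target_json pattern traced below.
gen_bdev_config() {
    local entries=() i
    for i in {1..10}; do
        entries+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${entries[*]}"
}
# Assumed usage, matching the traced invocation:
# bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_bdev_config)

Loading this config attaches all ten NVMe-oF controllers before the shutdown sequence itself is exercised.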
00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:58.022 { 00:24:58.022 "params": { 00:24:58.022 "name": "Nvme$subsystem", 00:24:58.022 "trtype": "$TEST_TRANSPORT", 00:24:58.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.022 "adrfam": "ipv4", 00:24:58.022 "trsvcid": "$NVMF_PORT", 00:24:58.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.022 "hdgst": ${hdgst:-false}, 00:24:58.022 "ddgst": ${ddgst:-false} 00:24:58.022 }, 00:24:58.022 "method": "bdev_nvme_attach_controller" 00:24:58.022 } 00:24:58.022 EOF 00:24:58.022 )") 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:58.022 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:58.022 { 00:24:58.022 "params": { 00:24:58.022 "name": "Nvme$subsystem", 00:24:58.022 "trtype": "$TEST_TRANSPORT", 00:24:58.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.022 "adrfam": "ipv4", 00:24:58.022 "trsvcid": "$NVMF_PORT", 00:24:58.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.022 "hdgst": ${hdgst:-false}, 00:24:58.022 "ddgst": ${ddgst:-false} 00:24:58.022 }, 00:24:58.022 "method": "bdev_nvme_attach_controller" 00:24:58.022 } 00:24:58.022 EOF 00:24:58.023 )") 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:58.023 { 00:24:58.023 "params": { 00:24:58.023 "name": "Nvme$subsystem", 00:24:58.023 "trtype": "$TEST_TRANSPORT", 00:24:58.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.023 "adrfam": "ipv4", 00:24:58.023 "trsvcid": "$NVMF_PORT", 00:24:58.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.023 "hdgst": ${hdgst:-false}, 00:24:58.023 "ddgst": ${ddgst:-false} 00:24:58.023 }, 00:24:58.023 "method": "bdev_nvme_attach_controller" 
00:24:58.023 } 00:24:58.023 EOF 00:24:58.023 )") 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:58.023 { 00:24:58.023 "params": { 00:24:58.023 "name": "Nvme$subsystem", 00:24:58.023 "trtype": "$TEST_TRANSPORT", 00:24:58.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.023 "adrfam": "ipv4", 00:24:58.023 "trsvcid": "$NVMF_PORT", 00:24:58.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.023 "hdgst": ${hdgst:-false}, 00:24:58.023 "ddgst": ${ddgst:-false} 00:24:58.023 }, 00:24:58.023 "method": "bdev_nvme_attach_controller" 00:24:58.023 } 00:24:58.023 EOF 00:24:58.023 )") 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:58.023 { 00:24:58.023 "params": { 00:24:58.023 "name": "Nvme$subsystem", 00:24:58.023 "trtype": "$TEST_TRANSPORT", 00:24:58.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.023 "adrfam": "ipv4", 00:24:58.023 "trsvcid": "$NVMF_PORT", 00:24:58.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.023 "hdgst": ${hdgst:-false}, 00:24:58.023 "ddgst": ${ddgst:-false} 00:24:58.023 }, 00:24:58.023 "method": "bdev_nvme_attach_controller" 00:24:58.023 } 00:24:58.023 EOF 00:24:58.023 )") 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:58.023 { 00:24:58.023 "params": { 00:24:58.023 "name": "Nvme$subsystem", 00:24:58.023 "trtype": "$TEST_TRANSPORT", 00:24:58.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.023 "adrfam": "ipv4", 00:24:58.023 "trsvcid": "$NVMF_PORT", 00:24:58.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.023 "hdgst": ${hdgst:-false}, 00:24:58.023 "ddgst": ${ddgst:-false} 00:24:58.023 }, 00:24:58.023 "method": "bdev_nvme_attach_controller" 00:24:58.023 } 00:24:58.023 EOF 00:24:58.023 )") 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:58.023 [2024-10-14 17:41:56.979878] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:24:58.023 [2024-10-14 17:41:56.979929] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:58.023 { 00:24:58.023 "params": { 00:24:58.023 "name": "Nvme$subsystem", 00:24:58.023 "trtype": "$TEST_TRANSPORT", 00:24:58.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.023 "adrfam": "ipv4", 00:24:58.023 "trsvcid": "$NVMF_PORT", 00:24:58.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.023 "hdgst": ${hdgst:-false}, 00:24:58.023 "ddgst": ${ddgst:-false} 00:24:58.023 }, 00:24:58.023 "method": "bdev_nvme_attach_controller" 00:24:58.023 } 00:24:58.023 EOF 00:24:58.023 )") 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:58.023 { 00:24:58.023 "params": { 00:24:58.023 "name": "Nvme$subsystem", 00:24:58.023 "trtype": "$TEST_TRANSPORT", 00:24:58.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.023 "adrfam": "ipv4", 00:24:58.023 "trsvcid": "$NVMF_PORT", 00:24:58.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.023 "hdgst": ${hdgst:-false}, 00:24:58.023 "ddgst": ${ddgst:-false} 00:24:58.023 }, 00:24:58.023 "method": "bdev_nvme_attach_controller" 00:24:58.023 } 00:24:58.023 EOF 00:24:58.023 )") 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:58.023 { 00:24:58.023 "params": { 00:24:58.023 "name": "Nvme$subsystem", 00:24:58.023 "trtype": "$TEST_TRANSPORT", 00:24:58.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.023 "adrfam": "ipv4", 00:24:58.023 "trsvcid": "$NVMF_PORT", 00:24:58.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.023 "hdgst": ${hdgst:-false}, 00:24:58.023 "ddgst": ${ddgst:-false} 00:24:58.023 }, 00:24:58.023 "method": "bdev_nvme_attach_controller" 00:24:58.023 } 00:24:58.023 EOF 00:24:58.023 )") 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:58.023 17:41:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:58.023 17:41:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:58.023 { 00:24:58.023 "params": { 00:24:58.023 "name": "Nvme$subsystem", 00:24:58.023 "trtype": "$TEST_TRANSPORT", 00:24:58.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.023 "adrfam": "ipv4", 
00:24:58.023 "trsvcid": "$NVMF_PORT", 00:24:58.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.023 "hdgst": ${hdgst:-false}, 00:24:58.023 "ddgst": ${ddgst:-false} 00:24:58.023 }, 00:24:58.023 "method": "bdev_nvme_attach_controller" 00:24:58.023 } 00:24:58.023 EOF 00:24:58.023 )") 00:24:58.023 17:41:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:58.023 17:41:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:24:58.023 17:41:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:24:58.023 17:41:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:24:58.023 "params": { 00:24:58.023 "name": "Nvme1", 00:24:58.023 "trtype": "tcp", 00:24:58.023 "traddr": "10.0.0.2", 00:24:58.023 "adrfam": "ipv4", 00:24:58.023 "trsvcid": "4420", 00:24:58.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:58.023 "hdgst": false, 00:24:58.023 "ddgst": false 00:24:58.023 }, 00:24:58.023 "method": "bdev_nvme_attach_controller" 00:24:58.023 },{ 00:24:58.023 "params": { 00:24:58.023 "name": "Nvme2", 00:24:58.023 "trtype": "tcp", 00:24:58.023 "traddr": "10.0.0.2", 00:24:58.023 "adrfam": "ipv4", 00:24:58.023 "trsvcid": "4420", 00:24:58.023 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:58.023 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:58.023 "hdgst": false, 00:24:58.023 "ddgst": false 00:24:58.023 }, 00:24:58.023 "method": "bdev_nvme_attach_controller" 00:24:58.023 },{ 00:24:58.023 "params": { 00:24:58.023 "name": "Nvme3", 00:24:58.023 "trtype": "tcp", 00:24:58.023 "traddr": "10.0.0.2", 00:24:58.023 "adrfam": "ipv4", 00:24:58.023 "trsvcid": "4420", 00:24:58.023 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:58.023 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:58.023 "hdgst": false, 00:24:58.023 "ddgst": false 00:24:58.023 }, 00:24:58.023 "method": "bdev_nvme_attach_controller" 00:24:58.023 },{ 00:24:58.023 "params": { 00:24:58.023 "name": "Nvme4", 00:24:58.023 "trtype": "tcp", 00:24:58.023 "traddr": "10.0.0.2", 00:24:58.023 "adrfam": "ipv4", 00:24:58.023 "trsvcid": "4420", 00:24:58.023 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:58.023 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:58.023 "hdgst": false, 00:24:58.023 "ddgst": false 00:24:58.023 }, 00:24:58.023 "method": "bdev_nvme_attach_controller" 00:24:58.023 },{ 00:24:58.023 "params": { 00:24:58.023 "name": "Nvme5", 00:24:58.023 "trtype": "tcp", 00:24:58.023 "traddr": "10.0.0.2", 00:24:58.023 "adrfam": "ipv4", 00:24:58.023 "trsvcid": "4420", 00:24:58.023 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:58.023 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:58.023 "hdgst": false, 00:24:58.023 "ddgst": false 00:24:58.023 }, 00:24:58.023 "method": "bdev_nvme_attach_controller" 00:24:58.023 },{ 00:24:58.023 "params": { 00:24:58.024 "name": "Nvme6", 00:24:58.024 "trtype": "tcp", 00:24:58.024 "traddr": "10.0.0.2", 00:24:58.024 "adrfam": "ipv4", 00:24:58.024 "trsvcid": "4420", 00:24:58.024 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:58.024 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:58.024 "hdgst": false, 00:24:58.024 "ddgst": false 00:24:58.024 }, 00:24:58.024 "method": "bdev_nvme_attach_controller" 00:24:58.024 },{ 00:24:58.024 "params": { 00:24:58.024 "name": "Nvme7", 00:24:58.024 "trtype": "tcp", 00:24:58.024 "traddr": "10.0.0.2", 00:24:58.024 
"adrfam": "ipv4", 00:24:58.024 "trsvcid": "4420", 00:24:58.024 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:58.024 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:58.024 "hdgst": false, 00:24:58.024 "ddgst": false 00:24:58.024 }, 00:24:58.024 "method": "bdev_nvme_attach_controller" 00:24:58.024 },{ 00:24:58.024 "params": { 00:24:58.024 "name": "Nvme8", 00:24:58.024 "trtype": "tcp", 00:24:58.024 "traddr": "10.0.0.2", 00:24:58.024 "adrfam": "ipv4", 00:24:58.024 "trsvcid": "4420", 00:24:58.024 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:58.024 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:58.024 "hdgst": false, 00:24:58.024 "ddgst": false 00:24:58.024 }, 00:24:58.024 "method": "bdev_nvme_attach_controller" 00:24:58.024 },{ 00:24:58.024 "params": { 00:24:58.024 "name": "Nvme9", 00:24:58.024 "trtype": "tcp", 00:24:58.024 "traddr": "10.0.0.2", 00:24:58.024 "adrfam": "ipv4", 00:24:58.024 "trsvcid": "4420", 00:24:58.024 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:58.024 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:58.024 "hdgst": false, 00:24:58.024 "ddgst": false 00:24:58.024 }, 00:24:58.024 "method": "bdev_nvme_attach_controller" 00:24:58.024 },{ 00:24:58.024 "params": { 00:24:58.024 "name": "Nvme10", 00:24:58.024 "trtype": "tcp", 00:24:58.024 "traddr": "10.0.0.2", 00:24:58.024 "adrfam": "ipv4", 00:24:58.024 "trsvcid": "4420", 00:24:58.024 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:58.024 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:58.024 "hdgst": false, 00:24:58.024 "ddgst": false 00:24:58.024 }, 00:24:58.024 "method": "bdev_nvme_attach_controller" 00:24:58.024 }' 00:24:58.024 [2024-10-14 17:41:57.051089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.024 [2024-10-14 17:41:57.091952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.934 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:59.934 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:24:59.934 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:59.934 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.934 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:59.934 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.934 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1168155 00:24:59.934 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:24:59.934 17:41:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:25:00.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1168155 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:00.951 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1167881 00:25:00.951 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:00.951 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:00.951 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:25:00.951 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:25:00.951 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:00.951 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:00.951 { 00:25:00.951 "params": { 00:25:00.951 "name": "Nvme$subsystem", 00:25:00.951 "trtype": "$TEST_TRANSPORT", 00:25:00.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.951 "adrfam": "ipv4", 00:25:00.951 "trsvcid": "$NVMF_PORT", 00:25:00.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.951 "hdgst": ${hdgst:-false}, 00:25:00.951 "ddgst": ${ddgst:-false} 00:25:00.951 }, 00:25:00.951 "method": "bdev_nvme_attach_controller" 00:25:00.951 } 00:25:00.951 EOF 00:25:00.951 )") 00:25:00.951 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:25:00.951 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:00.951 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:00.951 { 00:25:00.951 "params": { 00:25:00.951 "name": "Nvme$subsystem", 00:25:00.951 "trtype": "$TEST_TRANSPORT", 00:25:00.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.952 "adrfam": "ipv4", 00:25:00.952 "trsvcid": "$NVMF_PORT", 00:25:00.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.952 "hdgst": ${hdgst:-false}, 00:25:00.952 "ddgst": ${ddgst:-false} 00:25:00.952 }, 00:25:00.952 "method": "bdev_nvme_attach_controller" 00:25:00.952 } 00:25:00.952 EOF 00:25:00.952 )") 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:00.952 { 00:25:00.952 "params": { 00:25:00.952 "name": "Nvme$subsystem", 00:25:00.952 "trtype": "$TEST_TRANSPORT", 00:25:00.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.952 "adrfam": "ipv4", 00:25:00.952 "trsvcid": "$NVMF_PORT", 00:25:00.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.952 "hdgst": ${hdgst:-false}, 00:25:00.952 "ddgst": ${ddgst:-false} 00:25:00.952 }, 00:25:00.952 "method": "bdev_nvme_attach_controller" 00:25:00.952 } 00:25:00.952 EOF 00:25:00.952 )") 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:00.952 { 00:25:00.952 "params": { 00:25:00.952 "name": "Nvme$subsystem", 00:25:00.952 "trtype": "$TEST_TRANSPORT", 00:25:00.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.952 "adrfam": "ipv4", 00:25:00.952 "trsvcid": "$NVMF_PORT", 00:25:00.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.952 "hdgst": ${hdgst:-false}, 00:25:00.952 "ddgst": ${ddgst:-false} 00:25:00.952 }, 00:25:00.952 "method": "bdev_nvme_attach_controller" 00:25:00.952 } 00:25:00.952 EOF 00:25:00.952 )") 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:00.952 { 00:25:00.952 "params": { 00:25:00.952 "name": "Nvme$subsystem", 00:25:00.952 "trtype": "$TEST_TRANSPORT", 00:25:00.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.952 "adrfam": "ipv4", 00:25:00.952 "trsvcid": "$NVMF_PORT", 00:25:00.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.952 "hdgst": ${hdgst:-false}, 00:25:00.952 "ddgst": ${ddgst:-false} 00:25:00.952 }, 00:25:00.952 "method": "bdev_nvme_attach_controller" 00:25:00.952 } 00:25:00.952 EOF 00:25:00.952 )") 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:00.952 { 00:25:00.952 "params": { 00:25:00.952 "name": "Nvme$subsystem", 00:25:00.952 "trtype": "$TEST_TRANSPORT", 00:25:00.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.952 "adrfam": "ipv4", 00:25:00.952 "trsvcid": "$NVMF_PORT", 00:25:00.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.952 "hdgst": ${hdgst:-false}, 00:25:00.952 "ddgst": ${ddgst:-false} 00:25:00.952 }, 00:25:00.952 "method": "bdev_nvme_attach_controller" 00:25:00.952 } 00:25:00.952 EOF 00:25:00.952 )") 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:00.952 { 00:25:00.952 "params": { 00:25:00.952 "name": "Nvme$subsystem", 00:25:00.952 "trtype": "$TEST_TRANSPORT", 00:25:00.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.952 "adrfam": "ipv4", 00:25:00.952 "trsvcid": "$NVMF_PORT", 00:25:00.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.952 "hdgst": ${hdgst:-false}, 00:25:00.952 "ddgst": ${ddgst:-false} 00:25:00.952 }, 00:25:00.952 "method": "bdev_nvme_attach_controller" 00:25:00.952 } 00:25:00.952 EOF 00:25:00.952 )") 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:25:00.952 [2024-10-14 
17:41:59.909759] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:25:00.952 [2024-10-14 17:41:59.909808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168647 ] 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:00.952 { 00:25:00.952 "params": { 00:25:00.952 "name": "Nvme$subsystem", 00:25:00.952 "trtype": "$TEST_TRANSPORT", 00:25:00.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.952 "adrfam": "ipv4", 00:25:00.952 "trsvcid": "$NVMF_PORT", 00:25:00.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.952 "hdgst": ${hdgst:-false}, 00:25:00.952 "ddgst": ${ddgst:-false} 00:25:00.952 }, 00:25:00.952 "method": "bdev_nvme_attach_controller" 00:25:00.952 } 00:25:00.952 EOF 00:25:00.952 )") 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:00.952 { 00:25:00.952 "params": { 00:25:00.952 "name": "Nvme$subsystem", 00:25:00.952 "trtype": "$TEST_TRANSPORT", 00:25:00.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.952 "adrfam": "ipv4", 00:25:00.952 "trsvcid": "$NVMF_PORT", 00:25:00.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.952 "hdgst": ${hdgst:-false}, 00:25:00.952 "ddgst": ${ddgst:-false} 00:25:00.952 }, 00:25:00.952 "method": "bdev_nvme_attach_controller" 00:25:00.952 } 00:25:00.952 EOF 00:25:00.952 )") 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:00.952 { 00:25:00.952 "params": { 00:25:00.952 "name": "Nvme$subsystem", 00:25:00.952 "trtype": "$TEST_TRANSPORT", 00:25:00.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.952 "adrfam": "ipv4", 00:25:00.952 "trsvcid": "$NVMF_PORT", 00:25:00.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.952 "hdgst": ${hdgst:-false}, 00:25:00.952 "ddgst": ${ddgst:-false} 00:25:00.952 }, 00:25:00.952 "method": "bdev_nvme_attach_controller" 00:25:00.952 } 00:25:00.952 EOF 00:25:00.952 )") 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
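The trace above re-runs the same config generator, this time feeding bdevperf (launched at shutdown.sh@92 with -q 64 -o 65536 -w verify -t 1, i.e. queue depth 64, 64 KiB verify I/O for one second, matching the job headers in the results below). The helper's shape can be read off the nvmf/common.sh line references: @558 declares the config array, @560 iterates "${@:-1}", @580 appends one attach-controller fragment per id, and @582-@584 comma-join the fragments and pretty-print them through jq. A reconstructed sketch from those traced steps follows; the outer "subsystems"/"bdev" wrapper is an assumption, since only the joined fragments are echoed in this log:

    # Reconstructed sketch of gen_nvmf_target_json, inferred from the xtrace
    # records above; not a verbatim copy of nvmf/common.sh.
    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do   # @560: defaults to a single subsystem
            # @580: one bdev_nvme_attach_controller fragment per id
            config+=("{
              \"params\": {
                \"name\": \"Nvme$subsystem\",
                \"trtype\": \"$TEST_TRANSPORT\",
                \"traddr\": \"$NVMF_FIRST_TARGET_IP\",
                \"adrfam\": \"ipv4\",
                \"trsvcid\": \"$NVMF_PORT\",
                \"subnqn\": \"nqn.2016-06.io.spdk:cnode$subsystem\",
                \"hostnqn\": \"nqn.2016-06.io.spdk:host$subsystem\",
                \"hdgst\": ${hdgst:-false},
                \"ddgst\": ${ddgst:-false}
              },
              \"method\": \"bdev_nvme_attach_controller\"
            }")
        done
        # @582-@584: comma-join (IFS=,) and validate/pretty-print with jq;
        # the subsystems/bdev wrapper here is inferred, not shown in this log.
        local IFS=,
        jq . <<< "{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${config[*]}]}]}"
    }
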
00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:25:00.952 17:41:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:25:00.952 "params": { 00:25:00.952 "name": "Nvme1", 00:25:00.952 "trtype": "tcp", 00:25:00.952 "traddr": "10.0.0.2", 00:25:00.952 "adrfam": "ipv4", 00:25:00.952 "trsvcid": "4420", 00:25:00.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:00.952 "hdgst": false, 00:25:00.952 "ddgst": false 00:25:00.952 }, 00:25:00.952 "method": "bdev_nvme_attach_controller" 00:25:00.952 },{ 00:25:00.952 "params": { 00:25:00.952 "name": "Nvme2", 00:25:00.952 "trtype": "tcp", 00:25:00.952 "traddr": "10.0.0.2", 00:25:00.952 "adrfam": "ipv4", 00:25:00.952 "trsvcid": "4420", 00:25:00.952 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:00.952 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:00.952 "hdgst": false, 00:25:00.952 "ddgst": false 00:25:00.952 }, 00:25:00.952 "method": "bdev_nvme_attach_controller" 00:25:00.952 },{ 00:25:00.952 "params": { 00:25:00.952 "name": "Nvme3", 00:25:00.952 "trtype": "tcp", 00:25:00.952 "traddr": "10.0.0.2", 00:25:00.952 "adrfam": "ipv4", 00:25:00.952 "trsvcid": "4420", 00:25:00.952 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:00.952 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:00.952 "hdgst": false, 00:25:00.952 "ddgst": false 00:25:00.952 }, 00:25:00.952 "method": "bdev_nvme_attach_controller" 00:25:00.952 },{ 00:25:00.952 "params": { 00:25:00.952 "name": "Nvme4", 00:25:00.952 "trtype": "tcp", 00:25:00.952 "traddr": "10.0.0.2", 00:25:00.952 "adrfam": "ipv4", 00:25:00.952 "trsvcid": "4420", 00:25:00.952 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:00.952 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:00.952 "hdgst": false, 00:25:00.952 "ddgst": false 00:25:00.952 }, 00:25:00.952 "method": "bdev_nvme_attach_controller" 00:25:00.952 },{ 00:25:00.952 "params": { 00:25:00.952 "name": "Nvme5", 00:25:00.952 "trtype": "tcp", 00:25:00.952 "traddr": "10.0.0.2", 00:25:00.952 "adrfam": "ipv4", 00:25:00.952 "trsvcid": "4420", 00:25:00.953 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:00.953 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:00.953 "hdgst": false, 00:25:00.953 "ddgst": false 00:25:00.953 }, 00:25:00.953 "method": "bdev_nvme_attach_controller" 00:25:00.953 },{ 00:25:00.953 "params": { 00:25:00.953 "name": "Nvme6", 00:25:00.953 "trtype": "tcp", 00:25:00.953 "traddr": "10.0.0.2", 00:25:00.953 "adrfam": "ipv4", 00:25:00.953 "trsvcid": "4420", 00:25:00.953 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:00.953 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:00.953 "hdgst": false, 00:25:00.953 "ddgst": false 00:25:00.953 }, 00:25:00.953 "method": "bdev_nvme_attach_controller" 00:25:00.953 },{ 00:25:00.953 "params": { 00:25:00.953 "name": "Nvme7", 00:25:00.953 "trtype": "tcp", 00:25:00.953 "traddr": "10.0.0.2", 00:25:00.953 "adrfam": "ipv4", 00:25:00.953 "trsvcid": "4420", 00:25:00.953 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:00.953 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:00.953 "hdgst": false, 00:25:00.953 "ddgst": false 00:25:00.953 }, 00:25:00.953 "method": "bdev_nvme_attach_controller" 00:25:00.953 },{ 00:25:00.953 "params": { 00:25:00.953 "name": "Nvme8", 00:25:00.953 "trtype": "tcp", 00:25:00.953 "traddr": "10.0.0.2", 00:25:00.953 "adrfam": "ipv4", 00:25:00.953 "trsvcid": "4420", 00:25:00.953 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:00.953 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:25:00.953 "hdgst": false, 00:25:00.953 "ddgst": false 00:25:00.953 }, 00:25:00.953 "method": "bdev_nvme_attach_controller" 00:25:00.953 },{ 00:25:00.953 "params": { 00:25:00.953 "name": "Nvme9", 00:25:00.953 "trtype": "tcp", 00:25:00.953 "traddr": "10.0.0.2", 00:25:00.953 "adrfam": "ipv4", 00:25:00.953 "trsvcid": "4420", 00:25:00.953 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:00.953 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:00.953 "hdgst": false, 00:25:00.953 "ddgst": false 00:25:00.953 }, 00:25:00.953 "method": "bdev_nvme_attach_controller" 00:25:00.953 },{ 00:25:00.953 "params": { 00:25:00.953 "name": "Nvme10", 00:25:00.953 "trtype": "tcp", 00:25:00.953 "traddr": "10.0.0.2", 00:25:00.953 "adrfam": "ipv4", 00:25:00.953 "trsvcid": "4420", 00:25:00.953 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:00.953 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:00.953 "hdgst": false, 00:25:00.953 "ddgst": false 00:25:00.953 }, 00:25:00.953 "method": "bdev_nvme_attach_controller" 00:25:00.953 }' 00:25:00.953 [2024-10-14 17:41:59.981501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.953 [2024-10-14 17:42:00.028778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.329 Running I/O for 1 seconds... 00:25:03.523 2247.00 IOPS, 140.44 MiB/s 00:25:03.523 Latency(us) 00:25:03.523 [2024-10-14T15:42:02.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.523 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:03.523 Verification LBA range: start 0x0 length 0x400 00:25:03.523 Nvme1n1 : 1.14 280.95 17.56 0.00 0.00 225480.41 15791.06 214708.42 00:25:03.523 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:03.523 Verification LBA range: start 0x0 length 0x400 00:25:03.523 Nvme2n1 : 1.05 249.97 15.62 0.00 0.00 244154.29 8113.98 229688.08 00:25:03.523 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:03.523 Verification LBA range: start 0x0 length 0x400 00:25:03.523 Nvme3n1 : 1.10 301.51 18.84 0.00 0.00 196622.29 13169.62 207717.91 00:25:03.523 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:03.523 Verification LBA range: start 0x0 length 0x400 00:25:03.523 Nvme4n1 : 1.13 283.87 17.74 0.00 0.00 213094.89 14917.24 208716.56 00:25:03.523 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:03.523 Verification LBA range: start 0x0 length 0x400 00:25:03.523 Nvme5n1 : 1.15 283.65 17.73 0.00 0.00 210729.27 1451.15 212711.13 00:25:03.523 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:03.523 Verification LBA range: start 0x0 length 0x400 00:25:03.523 Nvme6n1 : 1.16 276.54 17.28 0.00 0.00 213560.17 17850.76 219701.64 00:25:03.523 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:03.523 Verification LBA range: start 0x0 length 0x400 00:25:03.523 Nvme7n1 : 1.14 281.55 17.60 0.00 0.00 206331.12 21720.50 210713.84 00:25:03.523 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:03.523 Verification LBA range: start 0x0 length 0x400 00:25:03.523 Nvme8n1 : 1.15 280.00 17.50 0.00 0.00 204521.34 1006.45 215707.06 00:25:03.523 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:03.523 Verification LBA range: start 0x0 length 0x400 00:25:03.523 Nvme9n1 : 1.15 277.15 17.32 0.00 0.00 203764.10 16727.28 215707.06 00:25:03.523 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:25:03.523 Verification LBA range: start 0x0 length 0x400 00:25:03.523 Nvme10n1 : 1.16 275.70 17.23 0.00 0.00 202016.87 15978.30 231685.36 00:25:03.523 [2024-10-14T15:42:02.661Z] =================================================================================================================== 00:25:03.523 [2024-10-14T15:42:02.662Z] Total : 2790.89 174.43 0.00 0.00 211383.07 1006.45 231685.36 00:25:03.524 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:25:03.524 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:03.524 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:03.524 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:03.524 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:03.524 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:03.524 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:25:03.524 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:03.524 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:25:03.524 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:03.782 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:03.782 rmmod nvme_tcp 00:25:03.782 rmmod nvme_fabrics 00:25:03.782 rmmod nvme_keyring 00:25:03.782 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:03.782 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:25:03.782 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:25:03.782 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 1167881 ']' 00:25:03.782 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 1167881 00:25:03.782 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1167881 ']' 00:25:03.782 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1167881 00:25:03.782 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:25:03.782 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:03.782 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1167881 00:25:03.782 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:03.782 17:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:03.782 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1167881' 00:25:03.782 killing process with pid 1167881 00:25:03.782 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1167881 00:25:03.782 17:42:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1167881 00:25:04.041 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:04.041 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:04.041 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:04.041 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:25:04.041 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:25:04.041 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:04.041 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:25:04.041 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:04.041 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:04.041 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.041 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:04.041 17:42:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:06.578 00:25:06.578 real 0m15.115s 00:25:06.578 user 0m33.239s 00:25:06.578 sys 0m5.744s 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:06.578 ************************************ 00:25:06.578 END TEST nvmf_shutdown_tc1 00:25:06.578 ************************************ 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:06.578 ************************************ 00:25:06.578 START TEST nvmf_shutdown_tc2 00:25:06.578 ************************************ 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # 
nvmf_shutdown_tc2 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:25:06.578 17:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:06.578 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:06.578 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.579 17:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:06.579 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:06.579 Found net devices under 0000:86:00.0: cvl_0_0 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.579 17:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:06.579 Found net devices under 0000:86:00.1: cvl_0_1 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:06.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:06.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:25:06.579 00:25:06.579 --- 10.0.0.2 ping statistics --- 00:25:06.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.579 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:06.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:06.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:25:06.579 00:25:06.579 --- 10.0.0.1 ping statistics --- 00:25:06.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.579 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:06.579 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:06.580 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:06.580 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:06.580 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:06.580 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:06.580 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:25:06.580 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:06.580 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1169797 00:25:06.580 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1169797 00:25:06.580 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:06.580 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1169797 ']' 00:25:06.580 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.580 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:06.580 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.580 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:06.580 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:06.580 [2024-10-14 17:42:05.665533] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:25:06.580 [2024-10-14 17:42:05.665583] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.839 [2024-10-14 17:42:05.736355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:06.839 [2024-10-14 17:42:05.777563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.839 [2024-10-14 17:42:05.777608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.839 [2024-10-14 17:42:05.777617] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.839 [2024-10-14 17:42:05.777624] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.839 [2024-10-14 17:42:05.777630] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
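nvmfappstart hands the tc2 target -m 0x1E, which is why the spdk_app_start notice above reports four available cores: 0x1E is binary 11110, i.e. cores 1 through 4, leaving core 0 free for the test harness. The four "Reactor started on core" notices that follow confirm the mapping. A quick way to expand such a mask, plain bash only:

    mask=0x1E
    for i in {0..31}; do (( mask >> i & 1 )) && echo "core $i"; done
    # prints: core 1, core 2, core 3, core 4
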
00:25:06.839 [2024-10-14 17:42:05.779216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:06.839 [2024-10-14 17:42:05.779322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:06.839 [2024-10-14 17:42:05.779429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.839 [2024-10-14 17:42:05.779429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:06.839 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:06.839 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:25:06.839 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:06.839 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:06.839 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:06.839 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.839 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:06.839 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.839 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:06.839 [2024-10-14 17:42:05.924209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:06.840 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:07.099 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:07.099 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.099 17:42:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:07.099 Malloc1 00:25:07.099 [2024-10-14 17:42:06.041547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.099 Malloc2 00:25:07.099 Malloc3 00:25:07.099 Malloc4 00:25:07.099 Malloc5 00:25:07.099 Malloc6 00:25:07.358 Malloc7 00:25:07.358 Malloc8 00:25:07.358 Malloc9 00:25:07.358 Malloc10 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1169876 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1169876 /var/tmp/bdevperf.sock 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1169876 ']' 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:07.358 17:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:07.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:07.358 { 00:25:07.358 "params": { 00:25:07.358 "name": "Nvme$subsystem", 00:25:07.358 "trtype": "$TEST_TRANSPORT", 00:25:07.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.358 "adrfam": "ipv4", 00:25:07.358 "trsvcid": "$NVMF_PORT", 00:25:07.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.358 "hdgst": ${hdgst:-false}, 00:25:07.358 "ddgst": ${ddgst:-false} 00:25:07.358 }, 00:25:07.358 "method": "bdev_nvme_attach_controller" 00:25:07.358 } 00:25:07.358 EOF 00:25:07.358 )") 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:07.358 { 00:25:07.358 "params": { 00:25:07.358 "name": "Nvme$subsystem", 00:25:07.358 "trtype": "$TEST_TRANSPORT", 00:25:07.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.358 "adrfam": "ipv4", 00:25:07.358 "trsvcid": "$NVMF_PORT", 00:25:07.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.358 "hdgst": ${hdgst:-false}, 00:25:07.358 "ddgst": ${ddgst:-false} 00:25:07.358 }, 00:25:07.358 "method": "bdev_nvme_attach_controller" 00:25:07.358 } 00:25:07.358 EOF 00:25:07.358 )") 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:07.358 { 00:25:07.358 "params": { 00:25:07.358 
"name": "Nvme$subsystem", 00:25:07.358 "trtype": "$TEST_TRANSPORT", 00:25:07.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.358 "adrfam": "ipv4", 00:25:07.358 "trsvcid": "$NVMF_PORT", 00:25:07.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.358 "hdgst": ${hdgst:-false}, 00:25:07.358 "ddgst": ${ddgst:-false} 00:25:07.358 }, 00:25:07.358 "method": "bdev_nvme_attach_controller" 00:25:07.358 } 00:25:07.358 EOF 00:25:07.358 )") 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:07.358 { 00:25:07.358 "params": { 00:25:07.358 "name": "Nvme$subsystem", 00:25:07.358 "trtype": "$TEST_TRANSPORT", 00:25:07.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.358 "adrfam": "ipv4", 00:25:07.358 "trsvcid": "$NVMF_PORT", 00:25:07.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.358 "hdgst": ${hdgst:-false}, 00:25:07.358 "ddgst": ${ddgst:-false} 00:25:07.358 }, 00:25:07.358 "method": "bdev_nvme_attach_controller" 00:25:07.358 } 00:25:07.358 EOF 00:25:07.358 )") 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:07.358 { 00:25:07.358 "params": { 00:25:07.358 "name": "Nvme$subsystem", 00:25:07.358 "trtype": "$TEST_TRANSPORT", 00:25:07.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.358 "adrfam": "ipv4", 00:25:07.358 "trsvcid": "$NVMF_PORT", 00:25:07.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.358 "hdgst": ${hdgst:-false}, 00:25:07.358 "ddgst": ${ddgst:-false} 00:25:07.358 }, 00:25:07.358 "method": "bdev_nvme_attach_controller" 00:25:07.358 } 00:25:07.358 EOF 00:25:07.358 )") 00:25:07.358 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:25:07.618 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:07.618 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:07.618 { 00:25:07.618 "params": { 00:25:07.618 "name": "Nvme$subsystem", 00:25:07.618 "trtype": "$TEST_TRANSPORT", 00:25:07.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.618 "adrfam": "ipv4", 00:25:07.618 "trsvcid": "$NVMF_PORT", 00:25:07.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.618 "hdgst": ${hdgst:-false}, 00:25:07.618 "ddgst": ${ddgst:-false} 00:25:07.618 }, 00:25:07.618 "method": "bdev_nvme_attach_controller" 00:25:07.618 } 00:25:07.618 EOF 00:25:07.618 )") 00:25:07.618 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:25:07.618 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:25:07.618 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:07.618 { 00:25:07.618 "params": { 00:25:07.618 "name": "Nvme$subsystem", 00:25:07.618 "trtype": "$TEST_TRANSPORT", 00:25:07.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.618 "adrfam": "ipv4", 00:25:07.618 "trsvcid": "$NVMF_PORT", 00:25:07.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.618 "hdgst": ${hdgst:-false}, 00:25:07.618 "ddgst": ${ddgst:-false} 00:25:07.618 }, 00:25:07.618 "method": "bdev_nvme_attach_controller" 00:25:07.618 } 00:25:07.618 EOF 00:25:07.618 )") 00:25:07.618 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:25:07.618 [2024-10-14 17:42:06.512852] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:25:07.618 [2024-10-14 17:42:06.512903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169876 ] 00:25:07.618 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:07.618 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:07.618 { 00:25:07.618 "params": { 00:25:07.618 "name": "Nvme$subsystem", 00:25:07.618 "trtype": "$TEST_TRANSPORT", 00:25:07.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.618 "adrfam": "ipv4", 00:25:07.618 "trsvcid": "$NVMF_PORT", 00:25:07.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.618 "hdgst": ${hdgst:-false}, 00:25:07.618 "ddgst": ${ddgst:-false} 00:25:07.618 }, 00:25:07.618 "method": "bdev_nvme_attach_controller" 00:25:07.618 } 00:25:07.618 EOF 00:25:07.618 )") 00:25:07.618 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:25:07.618 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:07.618 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:07.618 { 00:25:07.618 "params": { 00:25:07.618 "name": "Nvme$subsystem", 00:25:07.618 "trtype": "$TEST_TRANSPORT", 00:25:07.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.618 "adrfam": "ipv4", 00:25:07.618 "trsvcid": "$NVMF_PORT", 00:25:07.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.618 "hdgst": ${hdgst:-false}, 00:25:07.618 "ddgst": ${ddgst:-false} 00:25:07.618 }, 00:25:07.618 "method": "bdev_nvme_attach_controller" 00:25:07.618 } 00:25:07.618 EOF 00:25:07.618 )") 00:25:07.618 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:25:07.618 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:07.618 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:07.618 { 00:25:07.618 "params": { 00:25:07.619 "name": "Nvme$subsystem", 00:25:07.619 "trtype": "$TEST_TRANSPORT", 00:25:07.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.619 
"adrfam": "ipv4", 00:25:07.619 "trsvcid": "$NVMF_PORT", 00:25:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.619 "hdgst": ${hdgst:-false}, 00:25:07.619 "ddgst": ${ddgst:-false} 00:25:07.619 }, 00:25:07.619 "method": "bdev_nvme_attach_controller" 00:25:07.619 } 00:25:07.619 EOF 00:25:07.619 )") 00:25:07.619 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:25:07.619 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:25:07.619 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:25:07.619 17:42:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:25:07.619 "params": { 00:25:07.619 "name": "Nvme1", 00:25:07.619 "trtype": "tcp", 00:25:07.619 "traddr": "10.0.0.2", 00:25:07.619 "adrfam": "ipv4", 00:25:07.619 "trsvcid": "4420", 00:25:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:07.619 "hdgst": false, 00:25:07.619 "ddgst": false 00:25:07.619 }, 00:25:07.619 "method": "bdev_nvme_attach_controller" 00:25:07.619 },{ 00:25:07.619 "params": { 00:25:07.619 "name": "Nvme2", 00:25:07.619 "trtype": "tcp", 00:25:07.619 "traddr": "10.0.0.2", 00:25:07.619 "adrfam": "ipv4", 00:25:07.619 "trsvcid": "4420", 00:25:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:07.619 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:07.619 "hdgst": false, 00:25:07.619 "ddgst": false 00:25:07.619 }, 00:25:07.619 "method": "bdev_nvme_attach_controller" 00:25:07.619 },{ 00:25:07.619 "params": { 00:25:07.619 "name": "Nvme3", 00:25:07.619 "trtype": "tcp", 00:25:07.619 "traddr": "10.0.0.2", 00:25:07.619 "adrfam": "ipv4", 00:25:07.619 "trsvcid": "4420", 00:25:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:07.619 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:07.619 "hdgst": false, 00:25:07.619 "ddgst": false 00:25:07.619 }, 00:25:07.619 "method": "bdev_nvme_attach_controller" 00:25:07.619 },{ 00:25:07.619 "params": { 00:25:07.619 "name": "Nvme4", 00:25:07.619 "trtype": "tcp", 00:25:07.619 "traddr": "10.0.0.2", 00:25:07.619 "adrfam": "ipv4", 00:25:07.619 "trsvcid": "4420", 00:25:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:07.619 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:07.619 "hdgst": false, 00:25:07.619 "ddgst": false 00:25:07.619 }, 00:25:07.619 "method": "bdev_nvme_attach_controller" 00:25:07.619 },{ 00:25:07.619 "params": { 00:25:07.619 "name": "Nvme5", 00:25:07.619 "trtype": "tcp", 00:25:07.619 "traddr": "10.0.0.2", 00:25:07.619 "adrfam": "ipv4", 00:25:07.619 "trsvcid": "4420", 00:25:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:07.619 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:07.619 "hdgst": false, 00:25:07.619 "ddgst": false 00:25:07.619 }, 00:25:07.619 "method": "bdev_nvme_attach_controller" 00:25:07.619 },{ 00:25:07.619 "params": { 00:25:07.619 "name": "Nvme6", 00:25:07.619 "trtype": "tcp", 00:25:07.619 "traddr": "10.0.0.2", 00:25:07.619 "adrfam": "ipv4", 00:25:07.619 "trsvcid": "4420", 00:25:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:07.619 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:07.619 "hdgst": false, 00:25:07.619 "ddgst": false 00:25:07.619 }, 00:25:07.619 "method": "bdev_nvme_attach_controller" 00:25:07.619 },{ 00:25:07.619 "params": { 00:25:07.619 "name": "Nvme7", 00:25:07.619 "trtype": "tcp", 00:25:07.619 "traddr": "10.0.0.2", 
00:25:07.619 "adrfam": "ipv4", 00:25:07.619 "trsvcid": "4420", 00:25:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:07.619 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:07.619 "hdgst": false, 00:25:07.619 "ddgst": false 00:25:07.619 }, 00:25:07.619 "method": "bdev_nvme_attach_controller" 00:25:07.619 },{ 00:25:07.619 "params": { 00:25:07.619 "name": "Nvme8", 00:25:07.619 "trtype": "tcp", 00:25:07.619 "traddr": "10.0.0.2", 00:25:07.619 "adrfam": "ipv4", 00:25:07.619 "trsvcid": "4420", 00:25:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:07.619 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:07.619 "hdgst": false, 00:25:07.619 "ddgst": false 00:25:07.619 }, 00:25:07.619 "method": "bdev_nvme_attach_controller" 00:25:07.619 },{ 00:25:07.619 "params": { 00:25:07.619 "name": "Nvme9", 00:25:07.619 "trtype": "tcp", 00:25:07.619 "traddr": "10.0.0.2", 00:25:07.619 "adrfam": "ipv4", 00:25:07.619 "trsvcid": "4420", 00:25:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:07.619 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:07.619 "hdgst": false, 00:25:07.619 "ddgst": false 00:25:07.619 }, 00:25:07.619 "method": "bdev_nvme_attach_controller" 00:25:07.619 },{ 00:25:07.619 "params": { 00:25:07.619 "name": "Nvme10", 00:25:07.619 "trtype": "tcp", 00:25:07.619 "traddr": "10.0.0.2", 00:25:07.619 "adrfam": "ipv4", 00:25:07.619 "trsvcid": "4420", 00:25:07.619 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:07.619 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:07.619 "hdgst": false, 00:25:07.619 "ddgst": false 00:25:07.619 }, 00:25:07.619 "method": "bdev_nvme_attach_controller" 00:25:07.619 }' 00:25:07.619 [2024-10-14 17:42:06.584708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.619 [2024-10-14 17:42:06.625932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.996 Running I/O for 10 seconds... 
00:25:09.563 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:25:09.564 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.823 17:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1169876 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1169876 ']' 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1169876 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1169876 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1169876' 00:25:09.823 killing process with pid 1169876 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1169876 00:25:09.823 17:42:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1169876 00:25:09.823 Received shutdown signal, test time was about 0.963385 seconds 00:25:09.823 00:25:09.823 Latency(us) 00:25:09.823 [2024-10-14T15:42:08.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.823 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.823 Verification LBA range: start 0x0 length 0x400 00:25:09.823 Nvme1n1 : 0.95 272.99 17.06 0.00 0.00 231166.90 2418.59 200727.41 00:25:09.823 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.823 Verification LBA range: start 0x0 length 0x400 00:25:09.823 Nvme2n1 : 0.95 268.08 16.76 0.00 0.00 232029.14 18225.25 218702.99 00:25:09.823 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.823 Verification LBA range: start 0x0 length 0x400 00:25:09.823 Nvme3n1 : 0.93 276.41 17.28 0.00 0.00 220911.91 13731.35 218702.99 00:25:09.823 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.823 Verification LBA range: start 0x0 length 0x400 00:25:09.823 Nvme4n1 : 0.96 345.75 21.61 0.00 0.00 172735.92 6428.77 
207717.91 00:25:09.823 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.823 Verification LBA range: start 0x0 length 0x400 00:25:09.823 Nvme5n1 : 0.93 286.98 17.94 0.00 0.00 204348.13 3105.16 208716.56 00:25:09.823 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.823 Verification LBA range: start 0x0 length 0x400 00:25:09.823 Nvme6n1 : 0.94 272.77 17.05 0.00 0.00 212413.44 15104.49 221698.93 00:25:09.823 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.823 Verification LBA range: start 0x0 length 0x400 00:25:09.823 Nvme7n1 : 0.95 270.38 16.90 0.00 0.00 210714.09 31332.45 202724.69 00:25:09.824 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.824 Verification LBA range: start 0x0 length 0x400 00:25:09.824 Nvme8n1 : 0.94 271.02 16.94 0.00 0.00 206189.47 19598.38 183750.46 00:25:09.824 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.824 Verification LBA range: start 0x0 length 0x400 00:25:09.824 Nvme9n1 : 0.96 270.39 16.90 0.00 0.00 203189.15 2761.87 226692.14 00:25:09.824 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.824 Verification LBA range: start 0x0 length 0x400 00:25:09.824 Nvme10n1 : 0.96 271.10 16.94 0.00 0.00 199014.54 2559.02 234681.30 00:25:09.824 [2024-10-14T15:42:08.962Z] =================================================================================================================== 00:25:09.824 [2024-10-14T15:42:08.962Z] Total : 2805.88 175.37 0.00 0.00 208223.43 2418.59 234681.30 00:25:10.082 17:42:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:25:11.018 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1169797 00:25:11.018 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:25:11.018 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:11.018 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:11.018 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:11.018 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:11.018 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:11.018 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:25:11.018 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:11.019 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:25:11.019 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:11.019 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:11.019 rmmod nvme_tcp 00:25:11.019 rmmod nvme_fabrics 00:25:11.019 rmmod nvme_keyring 00:25:11.019 17:42:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:11.019 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:25:11.019 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:25:11.019 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 1169797 ']' 00:25:11.019 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 1169797 00:25:11.019 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1169797 ']' 00:25:11.019 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1169797 00:25:11.019 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:25:11.019 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:11.019 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1169797 00:25:11.278 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:11.278 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:11.278 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1169797' 00:25:11.278 killing process with pid 1169797 00:25:11.278 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1169797 00:25:11.278 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1169797 00:25:11.537 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:11.537 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:11.537 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:11.537 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:25:11.537 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:25:11.537 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:25:11.537 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:11.537 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:11.537 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:11.538 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.538 17:42:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.538 17:42:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:14.073 00:25:14.073 real 0m7.340s 00:25:14.073 user 0m21.468s 00:25:14.073 sys 0m1.346s 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:14.073 ************************************ 00:25:14.073 END TEST nvmf_shutdown_tc2 00:25:14.073 ************************************ 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:14.073 ************************************ 00:25:14.073 START TEST nvmf_shutdown_tc3 00:25:14.073 ************************************ 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.073 17:42:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.073 17:42:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:14.073 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:14.073 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:14.073 17:42:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:14.073 Found net devices under 0000:86:00.0: cvl_0_0 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:14.073 Found net devices under 0000:86:00.1: cvl_0_1 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.073 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:14.074 17:42:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:14.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:25:14.074 00:25:14.074 --- 10.0.0.2 ping statistics --- 00:25:14.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.074 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:14.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:25:14.074 00:25:14.074 --- 10.0.0.1 ping statistics --- 00:25:14.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.074 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:14.074 17:42:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:14.074 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:14.074 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:14.074 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:14.074 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:14.074 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=1171501 00:25:14.074 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 1171501 00:25:14.074 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:14.074 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1171501 ']' 00:25:14.074 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.074 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:14.074 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
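[annotation] tc3's nvmftestinit repeats the same bring-up; one side effect visible above is that the target command line now carries the ip netns exec cvl_0_0_ns_spdk prefix three times, because NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") re-prepends the wrapper on every init pass (redundant but harmless, since re-entering the same namespace is a no-op). The matching teardown was visible at the end of tc2: iptr pipes iptables-save through grep -v SPDK_NVMF into iptables-restore, dropping exactly the rules the comment tag marked. waitforlisten 1171501 then polls until the fresh target answers on /var/tmp/spdk.sock; a rough sketch of that loop, reconstructed from the traced variables (max_retries=100, the (( i == 0 )) check), with the probe RPC being an assumption about the real helper in autotest_common.sh:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i != 0; i--)); do
            # Bail out early if the target died instead of coming up.
            kill -0 "$pid" 2> /dev/null || return 1
            # Probe the RPC socket; any cheap call works once the app is listening
            # (rpc_get_methods is an assumed choice, not read from this log).
            if ./scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                break
            fi
            sleep 0.5
        done
        (( i == 0 )) && return 1   # retries exhausted; mirrors the traced check
        return 0
    }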
00:25:14.074 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:14.074 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:14.074 [2024-10-14 17:42:13.062895] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:25:14.074 [2024-10-14 17:42:13.062940] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.074 [2024-10-14 17:42:13.134839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:14.074 [2024-10-14 17:42:13.176711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.074 [2024-10-14 17:42:13.176747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.074 [2024-10-14 17:42:13.176755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.074 [2024-10-14 17:42:13.176760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.074 [2024-10-14 17:42:13.176765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.074 [2024-10-14 17:42:13.178315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.074 [2024-10-14 17:42:13.178423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:14.074 [2024-10-14 17:42:13.178533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.074 [2024-10-14 17:42:13.178535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:14.332 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:14.332 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:25:14.332 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:14.333 [2024-10-14 17:42:13.315409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:14.333 17:42:13 
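With the reactors up and the TCP transport created over /var/tmp/spdk.sock, the test next batches ten subsystem definitions through a generated rpcs.txt; the per-subsystem RPC lines emitted by the cat calls below are not echoed in this trace. A hedged sketch of the equivalent direct calls for one subsystem, with the Malloc size and serial number assumed rather than taken from the log (the listener address and port match the listening notice that follows):

# nvmf_create_transport is exactly as traced; the remaining calls are a
# plausible reconstruction of one rpcs.txt entry, not a verbatim copy.
rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create -b Malloc1 128 512          # 128 MiB, 512 B blocks (assumed)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420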
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.333 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:14.333 Malloc1 
00:25:14.333 [2024-10-14 17:42:13.427761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.333 Malloc2 00:25:14.592 Malloc3 00:25:14.592 Malloc4 00:25:14.592 Malloc5 00:25:14.592 Malloc6 00:25:14.592 Malloc7 00:25:14.592 Malloc8 00:25:14.852 Malloc9 00:25:14.852 Malloc10 00:25:14.852 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.852 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:14.852 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:14.852 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:14.852 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1171731 00:25:14.852 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1171731 /var/tmp/bdevperf.sock 00:25:14.852 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1171731 ']' 00:25:14.852 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:14.852 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:14.852 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:14.852 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:14.852 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:14.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
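bdevperf never sees a config file on disk: gen_nvmf_target_json prints the JSON that follows to stdout, and process substitution hands it to bdevperf as /dev/fd/63, matching the --json argument in the trace above. A reduced sketch of that wiring, with paths relative to an SPDK build tree:

# Feed the generated bdev config to bdevperf through a file descriptor;
# <(...) expands to /dev/fd/63 here.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10    # queue depth 64, 64 KiB verify I/O, 10 s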
00:25:14.852 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:14.853 { 00:25:14.853 "params": { 00:25:14.853 "name": "Nvme$subsystem", 00:25:14.853 "trtype": "$TEST_TRANSPORT", 00:25:14.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.853 "adrfam": "ipv4", 00:25:14.853 "trsvcid": "$NVMF_PORT", 00:25:14.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.853 "hdgst": ${hdgst:-false}, 00:25:14.853 "ddgst": ${ddgst:-false} 00:25:14.853 }, 00:25:14.853 "method": "bdev_nvme_attach_controller" 00:25:14.853 } 00:25:14.853 EOF 00:25:14.853 )") 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:14.853 { 00:25:14.853 "params": { 00:25:14.853 "name": "Nvme$subsystem", 00:25:14.853 "trtype": "$TEST_TRANSPORT", 00:25:14.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.853 "adrfam": "ipv4", 00:25:14.853 "trsvcid": "$NVMF_PORT", 00:25:14.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.853 "hdgst": ${hdgst:-false}, 00:25:14.853 "ddgst": ${ddgst:-false} 00:25:14.853 }, 00:25:14.853 "method": "bdev_nvme_attach_controller" 00:25:14.853 } 00:25:14.853 EOF 00:25:14.853 )") 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:14.853 { 00:25:14.853 "params": { 00:25:14.853 "name": "Nvme$subsystem", 00:25:14.853 "trtype": "$TEST_TRANSPORT", 00:25:14.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.853 "adrfam": "ipv4", 00:25:14.853 "trsvcid": "$NVMF_PORT", 00:25:14.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.853 "hdgst": ${hdgst:-false}, 00:25:14.853 "ddgst": ${ddgst:-false} 00:25:14.853 }, 00:25:14.853 "method": "bdev_nvme_attach_controller" 00:25:14.853 } 00:25:14.853 EOF 00:25:14.853 )") 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- 
# config+=("$(cat <<-EOF 00:25:14.853 { 00:25:14.853 "params": { 00:25:14.853 "name": "Nvme$subsystem", 00:25:14.853 "trtype": "$TEST_TRANSPORT", 00:25:14.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.853 "adrfam": "ipv4", 00:25:14.853 "trsvcid": "$NVMF_PORT", 00:25:14.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.853 "hdgst": ${hdgst:-false}, 00:25:14.853 "ddgst": ${ddgst:-false} 00:25:14.853 }, 00:25:14.853 "method": "bdev_nvme_attach_controller" 00:25:14.853 } 00:25:14.853 EOF 00:25:14.853 )") 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:14.853 { 00:25:14.853 "params": { 00:25:14.853 "name": "Nvme$subsystem", 00:25:14.853 "trtype": "$TEST_TRANSPORT", 00:25:14.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.853 "adrfam": "ipv4", 00:25:14.853 "trsvcid": "$NVMF_PORT", 00:25:14.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.853 "hdgst": ${hdgst:-false}, 00:25:14.853 "ddgst": ${ddgst:-false} 00:25:14.853 }, 00:25:14.853 "method": "bdev_nvme_attach_controller" 00:25:14.853 } 00:25:14.853 EOF 00:25:14.853 )") 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:14.853 { 00:25:14.853 "params": { 00:25:14.853 "name": "Nvme$subsystem", 00:25:14.853 "trtype": "$TEST_TRANSPORT", 00:25:14.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.853 "adrfam": "ipv4", 00:25:14.853 "trsvcid": "$NVMF_PORT", 00:25:14.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.853 "hdgst": ${hdgst:-false}, 00:25:14.853 "ddgst": ${ddgst:-false} 00:25:14.853 }, 00:25:14.853 "method": "bdev_nvme_attach_controller" 00:25:14.853 } 00:25:14.853 EOF 00:25:14.853 )") 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:14.853 { 00:25:14.853 "params": { 00:25:14.853 "name": "Nvme$subsystem", 00:25:14.853 "trtype": "$TEST_TRANSPORT", 00:25:14.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.853 "adrfam": "ipv4", 00:25:14.853 "trsvcid": "$NVMF_PORT", 00:25:14.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.853 "hdgst": ${hdgst:-false}, 00:25:14.853 "ddgst": ${ddgst:-false} 00:25:14.853 }, 00:25:14.853 "method": "bdev_nvme_attach_controller" 00:25:14.853 } 00:25:14.853 EOF 00:25:14.853 )") 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:25:14.853 [2024-10-14 17:42:13.906719] Starting SPDK 
v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:25:14.853 [2024-10-14 17:42:13.906768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1171731 ] 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:14.853 { 00:25:14.853 "params": { 00:25:14.853 "name": "Nvme$subsystem", 00:25:14.853 "trtype": "$TEST_TRANSPORT", 00:25:14.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.853 "adrfam": "ipv4", 00:25:14.853 "trsvcid": "$NVMF_PORT", 00:25:14.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.853 "hdgst": ${hdgst:-false}, 00:25:14.853 "ddgst": ${ddgst:-false} 00:25:14.853 }, 00:25:14.853 "method": "bdev_nvme_attach_controller" 00:25:14.853 } 00:25:14.853 EOF 00:25:14.853 )") 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:14.853 { 00:25:14.853 "params": { 00:25:14.853 "name": "Nvme$subsystem", 00:25:14.853 "trtype": "$TEST_TRANSPORT", 00:25:14.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.853 "adrfam": "ipv4", 00:25:14.853 "trsvcid": "$NVMF_PORT", 00:25:14.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.853 "hdgst": ${hdgst:-false}, 00:25:14.853 "ddgst": ${ddgst:-false} 00:25:14.853 }, 00:25:14.853 "method": "bdev_nvme_attach_controller" 00:25:14.853 } 00:25:14.853 EOF 00:25:14.853 )") 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:14.853 { 00:25:14.853 "params": { 00:25:14.853 "name": "Nvme$subsystem", 00:25:14.853 "trtype": "$TEST_TRANSPORT", 00:25:14.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.853 "adrfam": "ipv4", 00:25:14.853 "trsvcid": "$NVMF_PORT", 00:25:14.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.853 "hdgst": ${hdgst:-false}, 00:25:14.853 "ddgst": ${ddgst:-false} 00:25:14.853 }, 00:25:14.853 "method": "bdev_nvme_attach_controller" 00:25:14.853 } 00:25:14.853 EOF 00:25:14.853 )") 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 
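Each config+= above appends one unexpanded attach-controller stanza; the jq/IFS/printf steps around this point comma-join those stanzas and validate the assembled document. A reduced sketch of the pattern for two controllers; the outer wrapper is a reconstruction, since only the join and the jq . call appear verbatim in the trace:

# Collect one stanza per controller, join on "," via IFS, splice the list
# into a bdev-subsystem wrapper, and let jq validate and pretty-print it.
config=()
for subsystem in 1 2; do
    config+=("{ \"params\": { \"name\": \"Nvme$subsystem\" }, \"method\": \"bdev_nvme_attach_controller\" }")
done
jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ $(IFS=,; printf '%s\n' "${config[*]}") ] } ] }
JSON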
00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:25:14.853 17:42:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:25:14.853 "params": { 00:25:14.853 "name": "Nvme1", 00:25:14.853 "trtype": "tcp", 00:25:14.853 "traddr": "10.0.0.2", 00:25:14.853 "adrfam": "ipv4", 00:25:14.853 "trsvcid": "4420", 00:25:14.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:14.853 "hdgst": false, 00:25:14.853 "ddgst": false 00:25:14.854 }, 00:25:14.854 "method": "bdev_nvme_attach_controller" 00:25:14.854 },{ 00:25:14.854 "params": { 00:25:14.854 "name": "Nvme2", 00:25:14.854 "trtype": "tcp", 00:25:14.854 "traddr": "10.0.0.2", 00:25:14.854 "adrfam": "ipv4", 00:25:14.854 "trsvcid": "4420", 00:25:14.854 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:14.854 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:14.854 "hdgst": false, 00:25:14.854 "ddgst": false 00:25:14.854 }, 00:25:14.854 "method": "bdev_nvme_attach_controller" 00:25:14.854 },{ 00:25:14.854 "params": { 00:25:14.854 "name": "Nvme3", 00:25:14.854 "trtype": "tcp", 00:25:14.854 "traddr": "10.0.0.2", 00:25:14.854 "adrfam": "ipv4", 00:25:14.854 "trsvcid": "4420", 00:25:14.854 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:14.854 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:14.854 "hdgst": false, 00:25:14.854 "ddgst": false 00:25:14.854 }, 00:25:14.854 "method": "bdev_nvme_attach_controller" 00:25:14.854 },{ 00:25:14.854 "params": { 00:25:14.854 "name": "Nvme4", 00:25:14.854 "trtype": "tcp", 00:25:14.854 "traddr": "10.0.0.2", 00:25:14.854 "adrfam": "ipv4", 00:25:14.854 "trsvcid": "4420", 00:25:14.854 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:14.854 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:14.854 "hdgst": false, 00:25:14.854 "ddgst": false 00:25:14.854 }, 00:25:14.854 "method": "bdev_nvme_attach_controller" 00:25:14.854 },{ 00:25:14.854 "params": { 00:25:14.854 "name": "Nvme5", 00:25:14.854 "trtype": "tcp", 00:25:14.854 "traddr": "10.0.0.2", 00:25:14.854 "adrfam": "ipv4", 00:25:14.854 "trsvcid": "4420", 00:25:14.854 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:14.854 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:14.854 "hdgst": false, 00:25:14.854 "ddgst": false 00:25:14.854 }, 00:25:14.854 "method": "bdev_nvme_attach_controller" 00:25:14.854 },{ 00:25:14.854 "params": { 00:25:14.854 "name": "Nvme6", 00:25:14.854 "trtype": "tcp", 00:25:14.854 "traddr": "10.0.0.2", 00:25:14.854 "adrfam": "ipv4", 00:25:14.854 "trsvcid": "4420", 00:25:14.854 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:14.854 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:14.854 "hdgst": false, 00:25:14.854 "ddgst": false 00:25:14.854 }, 00:25:14.854 "method": "bdev_nvme_attach_controller" 00:25:14.854 },{ 00:25:14.854 "params": { 00:25:14.854 "name": "Nvme7", 00:25:14.854 "trtype": "tcp", 00:25:14.854 "traddr": "10.0.0.2", 00:25:14.854 "adrfam": "ipv4", 00:25:14.854 "trsvcid": "4420", 00:25:14.854 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:14.854 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:14.854 "hdgst": false, 00:25:14.854 "ddgst": false 00:25:14.854 }, 00:25:14.854 "method": "bdev_nvme_attach_controller" 00:25:14.854 },{ 00:25:14.854 "params": { 00:25:14.854 "name": "Nvme8", 00:25:14.854 "trtype": "tcp", 00:25:14.854 "traddr": "10.0.0.2", 00:25:14.854 "adrfam": "ipv4", 00:25:14.854 "trsvcid": "4420", 00:25:14.854 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:14.854 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:25:14.854 "hdgst": false, 00:25:14.854 "ddgst": false 00:25:14.854 }, 00:25:14.854 "method": "bdev_nvme_attach_controller" 00:25:14.854 },{ 00:25:14.854 "params": { 00:25:14.854 "name": "Nvme9", 00:25:14.854 "trtype": "tcp", 00:25:14.854 "traddr": "10.0.0.2", 00:25:14.854 "adrfam": "ipv4", 00:25:14.854 "trsvcid": "4420", 00:25:14.854 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:14.854 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:14.854 "hdgst": false, 00:25:14.854 "ddgst": false 00:25:14.854 }, 00:25:14.854 "method": "bdev_nvme_attach_controller" 00:25:14.854 },{ 00:25:14.854 "params": { 00:25:14.854 "name": "Nvme10", 00:25:14.854 "trtype": "tcp", 00:25:14.854 "traddr": "10.0.0.2", 00:25:14.854 "adrfam": "ipv4", 00:25:14.854 "trsvcid": "4420", 00:25:14.854 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:14.854 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:14.854 "hdgst": false, 00:25:14.854 "ddgst": false 00:25:14.854 }, 00:25:14.854 "method": "bdev_nvme_attach_controller" 00:25:14.854 }' 00:25:14.854 [2024-10-14 17:42:13.975571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.114 [2024-10-14 17:42:14.018171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.492 Running I/O for 10 seconds... 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:16.751 17:42:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:25:16.751 17:42:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:17.010 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:17.010 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:17.010 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:17.010 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:17.010 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.010 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:17.010 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.010 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:25:17.010 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:25:17.010 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:17.270 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:17.270 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:17.270 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:17.270 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:17.270 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.270 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:17.270 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.547 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:25:17.547 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:25:17.547 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:25:17.547 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:25:17.547 17:42:16 
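The three probes above are shutdown.sh's waitforio loop reading Nvme1n1's counter off the bdevperf RPC socket: 3, then 67, then 131 reads, sleeping 0.25 s between probes until the count clears 100. Condensed into a standalone loop, with the socket path and bdev name as traced:

# Poll until Nvme1n1 has completed at least 100 reads, giving up after 10 probes.
ret=1
for ((i = 10; i > 0; i--)); do
    count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
        | jq -r '.bdevs[0].num_read_ops')
    if [ "$count" -ge 100 ]; then
        ret=0
        break
    fi
    sleep 0.25
done
# ret is 0 once I/O is flowing through the target, 1 on timeout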
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:25:17.548 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1171501 00:25:17.548 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1171501 ']' 00:25:17.548 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1171501 00:25:17.548 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:25:17.548 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:17.548 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1171501 00:25:17.548 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:17.548 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:17.548 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1171501' 00:25:17.548 killing process with pid 1171501 00:25:17.548 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1171501 00:25:17.548 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1171501
00:25:17.548 [2024-10-14 17:42:16.487680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbf030 is same with the state(6) to be set
00:25:17.548 [2024-10-14 17:42:16.490150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbf520 is same with the state(6) to be set
00:25:17.549 [2024-10-14 17:42:16.491864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbf9f0 is same with the state(6) to be set
00:25:17.550 [2024-10-14 17:42:16.493782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set
00:25:17.550 [2024-10-14 17:42:16.493951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same
with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.493957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.493963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.493968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.493974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.493981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.493986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.493999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.494005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.494011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.494017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.494024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.494030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.494036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.494042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.494049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.494055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.494061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.494067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.494075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.494081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.494088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc03b0 is same with the state(6) to be set 00:25:17.550 [2024-10-14 17:42:16.494093] 
00:25:17.551 [2024-10-14 17:42:16.494373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.551 [2024-10-14 17:42:16.494403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for qid:0 cid:1, cid:2, and cid:3; duplicate entries omitted ...]
00:25:17.551 [2024-10-14 17:42:16.494462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e70270 is same with the state(6) to be set
[... the same four ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs precede each of the following recv-state entries; duplicate entries omitted ...]
00:25:17.551 [2024-10-14 17:42:16.494546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6fe10 is same with the state(6) to be set
00:25:17.551 [2024-10-14 17:42:16.494633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6caa0 is same with the state(6) to be set
00:25:17.551 [2024-10-14 17:42:16.494716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6da30 is same with the state(6) to be set
00:25:17.551 [2024-10-14 17:42:16.494790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4c30 is same with the state(6) to be set
00:25:17.551 [2024-10-14 17:42:16.495244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.551 [2024-10-14 17:42:16.495264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching WRITE sqid:1 cid:5-63 nsid:1 lba:25216-32640 len:128 and READ sqid:1 cid:0-3 nsid:1 lba:24576-24960 len:128 commands, each followed by the same ABORTED - SQ DELETION (00/08) completion, omitted ...]
00:25:17.553 [2024-10-14 17:42:16.496228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc0880 is same with the state(6) to be set
00:25:17.553 [2024-10-14 17:42:16.496259] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2272580 was disconnected and freed. reset controller.
[... identical tcp.c:1773 *ERROR* entries for tqpair=0x1dc0880, logged from 17:42:16.496239 through 17:42:16.496620 around the notice above, omitted ...]
00:25:17.554 [2024-10-14 17:42:16.497567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc0d50 is same with the state(6) to be set
[... identical tcp.c:1773 *ERROR* entries for tqpair=0x1dc0d50 repeated through 17:42:16.497962; duplicate entries omitted ...]
00:25:17.554 [2024-10-14 17:42:16.498767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set
[... identical tcp.c:1773 *ERROR* entries for tqpair=0x1dc1240 repeated; duplicate entries omitted ...]
00:25:17.555 [2024-10-14 17:42:16.498835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same
with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498971] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.498995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.499001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.499007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.499013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.499018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.499024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.499031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.499036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.499042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.499048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.499053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.499059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.499065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.499072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.499079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.499084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.499090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the state(6) to be set 00:25:17.555 [2024-10-14 17:42:16.499096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1240 is same with the 
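[editor's note] The tcp.c:1773 flood above comes from the SPDK NVMe-oF TCP target refusing to re-enter the receive state a qpair already holds: one line is printed per attempted no-op transition while the connection is being torn down. A minimal self-contained sketch of that guard pattern, assuming illustrative type names and an arbitrary state value 6 (the real definitions live in SPDK's lib/nvmf/tcp.c):

#include <stdio.h>

/* Illustrative stand-ins for SPDK's internal types; the names and the
 * numeric state value are assumptions for this sketch only. */
enum pdu_recv_state { RECV_STATE_EXAMPLE = 6 };

struct tcp_qpair {
    enum pdu_recv_state recv_state;
};

/* Guard pattern implied by the tcp.c:1773 message: setting the recv
 * state to the value it already holds logs an error and does nothing. */
static void set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
{
    if (tqpair->recv_state == state) {
        fprintf(stderr,
                "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tqpair, (int)state);
        return; /* no transition; each retry prints one more line */
    }
    tqpair->recv_state = state;
}

int main(void)
{
    struct tcp_qpair q = { .recv_state = RECV_STATE_EXAMPLE };
    set_recv_state(&q, RECV_STATE_EXAMPLE); /* reproduces the message once */
    return 0;
}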
00:25:17.555 [2024-10-14 17:42:16.499967] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:25:17.555 [2024-10-14 17:42:16.500002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d4c30 (9): Bad file descriptor
00:25:17.555 [2024-10-14 17:42:16.500514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.555 [2024-10-14 17:42:16.500534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.557 [... command/completion pairs repeated for WRITE cid:6-63 (lba:25344-32640) and READ cid:0-4 (lba:24576-25088), all ABORTED - SQ DELETION (00/08), through 17:42:16.515319 ...]
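[editor's note] In each NOTICE pair above, spdk_nvme_print_completion renders the completion status as text plus "(sct/sc)" in hex: "(00/08)" is status code type 0x0 (generic command status) with status code 0x08, Command Aborted due to SQ Deletion, which is the expected status for I/O still in flight when a submission queue is deleted for a controller reset. A small standalone decoder for that field, with a locally defined struct standing in for the spec-defined layout (SPDK: struct spdk_nvme_cpl):

#include <stdio.h>
#include <stdint.h>

/* Local model of the NVMe completion status field; the real layout is
 * defined by the NVMe spec. */
struct nvme_status {
    uint8_t sct; /* status code type: 0x0 = generic command status */
    uint8_t sc;  /* status code: 0x08 = aborted due to SQ deletion  */
};

static const char *status_string(struct nvme_status st)
{
    if (st.sct == 0x0 && st.sc == 0x08)
        return "ABORTED - SQ DELETION";
    return "OTHER";
}

int main(void)
{
    struct nvme_status st = { .sct = 0x00, .sc = 0x08 };
    /* Matches the log's "ABORTED - SQ DELETION (00/08)" rendering */
    printf("%s (%02x/%02x)\n", status_string(st), st.sct, st.sc);
    return 0;
}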
00:25:17.557 [2024-10-14 17:42:16.515425] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22710a0 was disconnected and freed. reset controller.
00:25:17.557 [2024-10-14 17:42:16.515552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.557 [2024-10-14 17:42:16.515566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.557 [... three more ASYNC EVENT REQUEST pairs (cid:1-3) aborted the same way, followed by nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2890 is same with the state(6) to be set ...]
00:25:17.557 [... the same four aborted ASYNC EVENT REQUESTs and recv-state error repeat for tqpair=0x22ca950 and tqpair=0x1d85610 ...]
00:25:17.557 [2024-10-14 17:42:16.515892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e70270 (9): Bad file descriptor
00:25:17.557 [... same flush error repeated for tqpair=0x1e6fe10, 0x1e6caa0, and 0x1e6da30 ...]
00:25:17.557 [... the four-AER abort sequence repeats once more, ending with the recv-state error for tqpair=0x2290e90 at 17:42:16.516064 ...]
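[editor's note] The aborted ASYNC EVENT REQUESTs above are normal during a reset: the host keeps a fixed number of AERs permanently outstanding on each admin queue (four per qpair in this log, cid:0-3), so deleting the queue necessarily completes all of them as aborted, and the driver rearms them after the controller reconnects. A sketch of that handling, assuming a hypothetical on_aer_completion callback (names are illustrative, not SPDK API):

#include <stdio.h>

#define NUM_AERS 4 /* matches the four cid:0-3 admin entries per qpair above */

/* Hypothetical completion handler illustrating why an aborted AER is
 * benign: no event was lost, the request slot is simply rearmed later. */
static void on_aer_completion(int cid, int aborted_by_sq_deletion)
{
    if (aborted_by_sq_deletion) {
        printf("AER cid:%d aborted by reset; resubmit after reconnect\n", cid);
        return;
    }
    printf("AER cid:%d fired: handle event, then resubmit immediately\n", cid);
}

int main(void)
{
    /* Reset path: all outstanding AERs complete as ABORTED - SQ DELETION */
    for (int cid = 0; cid < NUM_AERS; cid++)
        on_aer_completion(cid, 1);
    return 0;
}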
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.557 [2024-10-14 17:42:16.516105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-10-14 17:42:16.516114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.557 [2024-10-14 17:42:16.516123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-10-14 17:42:16.516132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.557 [2024-10-14 17:42:16.516141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-10-14 17:42:16.516150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.557 [2024-10-14 17:42:16.516159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-10-14 17:42:16.516168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b7f0 is same with the state(6) to be set 00:25:17.558 [2024-10-14 17:42:16.517012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-10-14 17:42:16.517038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-10-14 17:42:16.517055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-10-14 17:42:16.517065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-10-14 17:42:16.517076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-10-14 17:42:16.517085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-10-14 17:42:16.517097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-10-14 17:42:16.517114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-10-14 17:42:16.517124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-10-14 17:42:16.517133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-10-14 17:42:16.517144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-10-14 17:42:16.517153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:17.558 [2024-10-14 17:42:16.517163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-10-14 17:42:16.517172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-10-14 17:42:16.517183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-10-14 17:42:16.517191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-10-14 17:42:16.517202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-10-14 17:42:16.517211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-10-14 17:42:16.517221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-10-14 17:42:16.517230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-10-14 17:42:16.517240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-10-14 17:42:16.517249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-10-14 17:42:16.517260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-10-14 17:42:16.517268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-10-14 17:42:16.517279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-10-14 17:42:16.517287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-10-14 17:42:16.517298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-10-14 17:42:16.517307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-10-14 17:42:16.517317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-10-14 17:42:16.517325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-10-14 17:42:16.517336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-10-14 17:42:16.517345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:17.558 [2024-10-14 17:42:16.517357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.558 [2024-10-14 17:42:16.517366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command/completion pairs repeat for cid:20-62 (lba:35328-40704, stepping by 128), each completed ABORTED - SQ DELETION (00/08) ...]
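The "(00/08)" pair that spdk_nvme_print_completion prints above is the NVMe status code type and status code of the completion: SCT 0x0 is Generic Command Status, and SC 0x08 in that set is "Command Aborted due to SQ Deletion", which is what every in-flight I/O on the queue reports once the submission queue is torn down during the controller reset. A minimal decoding sketch (illustration only, not SPDK code; the helper name is hypothetical):

/* decode_status: hypothetical helper mapping the (SCT/SC) pair shown in
 * the log to the string SPDK prints for it. SCT 0x0 = Generic Command
 * Status; SC 0x08 = Command Aborted due to SQ Deletion. */
#include <stdio.h>

static const char *decode_status(unsigned sct, unsigned sc)
{
    if (sct == 0x0 && sc == 0x08)
        return "ABORTED - SQ DELETION";
    return "other status";
}

int main(void)
{
    /* Reproduces the "(00/08)" annotation seen on each aborted command. */
    printf("(%02x/%02x) -> %s\n", 0x00, 0x08, decode_status(0x00, 0x08));
    return 0;
}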
00:25:17.559 [2024-10-14 17:42:16.518233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:17.559 [2024-10-14 17:42:16.518290] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x226cff0 was disconnected and freed. reset controller.
00:25:17.559 [2024-10-14 17:42:16.520296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.559 [2024-10-14 17:42:16.520323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d4c30 with addr=10.0.0.2, port=4420
00:25:17.559 [2024-10-14 17:42:16.520334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4c30 is same with the state(6) to be set
00:25:17.559 [2024-10-14 17:42:16.520405] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[... "Unexpected PDU type 0x00" repeats twice more at 17:42:16.520461 and 17:42:16.520510 ...]
00:25:17.559 [2024-10-14 17:42:16.521906] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:25:17.559 [2024-10-14 17:42:16.521933] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:25:17.559 [2024-10-14 17:42:16.521958] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2890 (9): Bad file descriptor
00:25:17.559 [2024-10-14 17:42:16.521973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d4c30 (9): Bad file descriptor
[... WRITE cid:59-63 (lba:32128-32640) and READ cid:0-58 (lba:24576-32000) command/completion pairs repeat, each completed ABORTED - SQ DELETION (00/08) ...]
00:25:17.561 [2024-10-14 17:42:16.523261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393570 is same with the state(6) to be set
00:25:17.561 [2024-10-14 17:42:16.523338] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2393570 was disconnected and freed. reset controller.
[... READ cid:6-63 (lba:25344-32640) and WRITE cid:0-5 (lba:32768-33408) command/completion pairs repeat, each completed ABORTED - SQ DELETION (00/08) ...]
00:25:17.562 [2024-10-14 17:42:16.524585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2395a70 is same with the state(6) to be set
00:25:17.562 [2024-10-14 17:42:16.524664] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2395a70 was disconnected and freed. reset controller.
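Each "connect() failed, errno = 111" above is Linux ECONNREFUSED: the reconnect attempts race the target at 10.0.0.2:4420 (the NVMe/TCP well-known port) while its listener is down, so the TCP handshake is refused. A standalone sketch reproducing that errno (illustrative only; assumes a Linux host and no listener on the target address/port):

/* Connects to 10.0.0.2:4420 and prints the errno on failure; with no
 * listener present this prints "errno = 111 (Connection refused)",
 * matching the posix_sock_create errors in the log above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),           /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}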
00:25:17.562 [2024-10-14 17:42:16.524726] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:17.562 [2024-10-14 17:42:16.524775] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:17.562 [2024-10-14 17:42:16.525303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.562 [2024-10-14 17:42:16.525320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6caa0 with addr=10.0.0.2, port=4420 00:25:17.562 [2024-10-14 17:42:16.525329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6caa0 is same with the state(6) to be set 00:25:17.562 [2024-10-14 17:42:16.525348] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:17.562 [2024-10-14 17:42:16.525355] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:17.562 [2024-10-14 17:42:16.525364] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:17.562 [2024-10-14 17:42:16.527777] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:17.562 [2024-10-14 17:42:16.527843] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:17.562 [2024-10-14 17:42:16.527866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.562 [2024-10-14 17:42:16.527876] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.562 [2024-10-14 17:42:16.527887] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:17.562 [2024-10-14 17:42:16.528022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.562 [2024-10-14 17:42:16.528037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2890 with addr=10.0.0.2, port=4420 00:25:17.562 [2024-10-14 17:42:16.528046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2890 is same with the state(6) to be set 00:25:17.562 [2024-10-14 17:42:16.528057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6caa0 (9): Bad file descriptor 00:25:17.562 [2024-10-14 17:42:16.528071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ca950 (9): Bad file descriptor 00:25:17.562 [2024-10-14 17:42:16.528092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d85610 (9): Bad file descriptor 00:25:17.562 [2024-10-14 17:42:16.528120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2290e90 (9): Bad file descriptor 00:25:17.562 [2024-10-14 17:42:16.528138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b7f0 (9): Bad file descriptor 00:25:17.562 [2024-10-14 17:42:16.528333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.563 [2024-10-14 17:42:16.528348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e70270 with addr=10.0.0.2, port=4420 00:25:17.563 [2024-10-14 17:42:16.528360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e70270 is same with the state(6) to be set 00:25:17.563 [2024-10-14 17:42:16.528472] 
00:25:17.563 [2024-10-14 17:42:16.528485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6da30 with addr=10.0.0.2, port=4420
00:25:17.563 [2024-10-14 17:42:16.528493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6da30 is same with the state(6) to be set
00:25:17.563 [2024-10-14 17:42:16.528502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2890 (9): Bad file descriptor
00:25:17.563 [2024-10-14 17:42:16.528512] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:25:17.563 [2024-10-14 17:42:16.528518] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:25:17.563 [2024-10-14 17:42:16.528526] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:25:17.563 [2024-10-14 17:42:16.528828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.563 [2024-10-14 17:42:16.528840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated command/completion pairs elided: WRITE sqid:1 cid:1-3 (lba:32896-33152) and READ sqid:1 cid:4-63 (lba:25088-32640), each len:128, each completed ABORTED - SQ DELETION (00/08) ...]
00:25:17.564 [2024-10-14 17:42:16.529949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2394750 is same with the state(6) to be set
00:25:17.564 [2024-10-14 17:42:16.531317] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:25:17.564 [2024-10-14 17:42:16.531334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.564 [2024-10-14 17:42:16.531342] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:25:17.564 [2024-10-14 17:42:16.531367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e70270 (9): Bad file descriptor
00:25:17.564 [2024-10-14 17:42:16.531378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6da30 (9): Bad file descriptor
00:25:17.564 [2024-10-14 17:42:16.531387] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:25:17.564 [2024-10-14 17:42:16.531395] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:25:17.564 [2024-10-14 17:42:16.531403] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:25:17.564 [2024-10-14 17:42:16.531468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.564 [2024-10-14 17:42:16.531754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.564 [2024-10-14 17:42:16.531769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d4c30 with addr=10.0.0.2, port=4420
00:25:17.564 [2024-10-14 17:42:16.531778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4c30 is same with the state(6) to be set
00:25:17.564 [2024-10-14 17:42:16.531879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.564 [2024-10-14 17:42:16.531890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6fe10 with addr=10.0.0.2, port=4420
00:25:17.564 [2024-10-14 17:42:16.531898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6fe10 is same with the state(6) to be set
00:25:17.564 [2024-10-14 17:42:16.531906] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.564 [2024-10-14 17:42:16.531913] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.564 [2024-10-14 17:42:16.531924] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.564 [2024-10-14 17:42:16.531936] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:25:17.564 [2024-10-14 17:42:16.531943] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:25:17.564 [2024-10-14 17:42:16.531950] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:25:17.564 [2024-10-14 17:42:16.532236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.564 [2024-10-14 17:42:16.532246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.564 [2024-10-14 17:42:16.532255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d4c30 (9): Bad file descriptor
00:25:17.565 [2024-10-14 17:42:16.532265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6fe10 (9): Bad file descriptor
00:25:17.565 [2024-10-14 17:42:16.532311] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:25:17.565 [2024-10-14 17:42:16.532320] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:25:17.565 [2024-10-14 17:42:16.532328] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:25:17.565 [2024-10-14 17:42:16.532339] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:25:17.565 [2024-10-14 17:42:16.532346] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:25:17.565 [2024-10-14 17:42:16.532353] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:25:17.565 [2024-10-14 17:42:16.532388] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:25:17.565 [2024-10-14 17:42:16.532398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.565 [2024-10-14 17:42:16.532404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.565 [2024-10-14 17:42:16.532578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.565 [2024-10-14 17:42:16.532592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6caa0 with addr=10.0.0.2, port=4420
00:25:17.565 [2024-10-14 17:42:16.532605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6caa0 is same with the state(6) to be set
00:25:17.565 [2024-10-14 17:42:16.532635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6caa0 (9): Bad file descriptor
00:25:17.565 [2024-10-14 17:42:16.532663] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:25:17.565 [2024-10-14 17:42:16.532671] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:25:17.565 [2024-10-14 17:42:16.532678] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:25:17.565 [2024-10-14 17:42:16.532708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.565 [2024-10-14 17:42:16.535416] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:25:17.565 [2024-10-14 17:42:16.535686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.565 [2024-10-14 17:42:16.535700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2890 with addr=10.0.0.2, port=4420
00:25:17.565 [2024-10-14 17:42:16.535707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2890 is same with the state(6) to be set
00:25:17.565 [2024-10-14 17:42:16.535732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2890 (9): Bad file descriptor
00:25:17.565 [2024-10-14 17:42:16.535756] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:25:17.565 [2024-10-14 17:42:16.535765] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:25:17.565 [2024-10-14 17:42:16.535772] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:25:17.565 [2024-10-14 17:42:16.535797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.565 [2024-10-14 17:42:16.537995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.565 [2024-10-14 17:42:16.538008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated command/completion pairs elided: READ sqid:1 cid:1-63 (lba:24704-32640), each len:128, each completed ABORTED - SQ DELETION (00/08) ...]
00:25:17.567 [2024-10-14 17:42:16.538952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226e510 is same with the state(6) to be set
00:25:17.567 [2024-10-14 17:42:16.539935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.567 [2024-10-14 17:42:16.539947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated command/completion pairs elided: READ sqid:1 cid:1-31 (lba:24704-28544), each len:128, each completed ABORTED - SQ DELETION (00/08) ...]
00:25:17.567 [2024-10-14 17:42:16.540414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.567 [2024-10-14 17:42:16.540420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.567 [2024-10-14 17:42:16.540429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.567 [2024-10-14 17:42:16.540435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.567 [2024-10-14 17:42:16.540443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.567 [2024-10-14 17:42:16.540451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.567 [2024-10-14 17:42:16.540459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:17.568 [2024-10-14 17:42:16.540900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.540987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.540995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.541002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.541011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.541017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.541025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.541031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.541039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 
17:42:16.541045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.541053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.541060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.541067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2555cf0 is same with the state(6) to be set 00:25:17.568 [2024-10-14 17:42:16.542057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.542070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.542081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.542088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.542096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.542103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.542111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.542117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.542125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.542132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.542140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.542146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.542154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.542160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.542168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.542177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.542185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.542191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.542199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.542206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.542214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-10-14 17:42:16.542220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-10-14 17:42:16.542228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542330] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542626] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-10-14 17:42:16.542779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-10-14 17:42:16.542787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.542793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.542801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.542808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.542816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.542822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.542830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.542836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.542845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.542851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.542859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.542866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.542874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.542880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.542888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.542894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.542902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.542910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.542918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.542924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.542932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.542939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.542946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.542953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.542961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.542967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.542976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.542982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.542991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.542997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.543004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226e6f0 is same with the state(6) to be set 00:25:17.570 [2024-10-14 17:42:16.543981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.543993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.544004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.544011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.544020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.544026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.544035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.544041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.544050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.544056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.544064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.544070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.544082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.544088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.544096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.544103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.544111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.544118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.544126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.544133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.544142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.544148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.544156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.544163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.544171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.544178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.544185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.544192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.544200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.544206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.544214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.544220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.570 [2024-10-14 17:42:16.544229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.570 [2024-10-14 17:42:16.544235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.571 [2024-10-14 17:42:16.544243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.571 [2024-10-14 17:42:16.544249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.571 [2024-10-14 17:42:16.544257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.571 [2024-10-14 17:42:16.544269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.571 [2024-10-14 17:42:16.544277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.571 [2024-10-14 17:42:16.544283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.571 [2024-10-14 17:42:16.544291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.571 [2024-10-14 17:42:16.544298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.571 [2024-10-14 17:42:16.544306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.571 [2024-10-14 17:42:16.544313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.571 [2024-10-14 17:42:16.544321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.571 [2024-10-14 17:42:16.544328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.571 [2024-10-14 17:42:16.544336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.571 [2024-10-14 17:42:16.544342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.571 [2024-10-14 17:42:16.544350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.571 [2024-10-14 17:42:16.544357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.571 [2024-10-14 17:42:16.544365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.571 [2024-10-14 17:42:16.544371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.571 [2024-10-14 17:42:16.544379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.571 [2024-10-14 17:42:16.544386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.571 [2024-10-14 17:42:16.544394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.571 [2024-10-14 17:42:16.544400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.571 [2024-10-14 17:42:16.544408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.571 [2024-10-14 17:42:16.544415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.571 [2024-10-14 17:42:16.544423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.571 [2024-10-14 17:42:16.544430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.571 [2024-10-14 17:42:16.544438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.571 [2024-10-14 17:42:16.544444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.571 [2024-10-14 17:42:16.544454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.571 [2024-10-14 17:42:16.544461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.571 [2024-10-14 17:42:16.544469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.571 [2024-10-14 17:42:16.544475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.571 [2024-10-14 17:42:16.544483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.571 [2024-10-14 17:42:16.544489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.571 [2024-10-14 17:42:16.544497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.571 [2024-10-14 17:42:16.544504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.571 [... 29 further identical READ/completion pairs omitted: cid:35-63, lba:29056-32640 in steps of 128 blocks, each command completed ABORTED - SQ DELETION (00/08) ...]
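The flood of notices above is SPDK printing every READ that was still outstanding when its submission queue was deleted during the controller reset: each one completes with generic status 00/08, Command Aborted due to SQ Deletion. As a minimal sketch (not part of this test; the callback name and counter are illustrative), an application-side completion callback can recognize these aborts like so:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Illustrative spdk_nvme_cmd_cb: counts commands that were aborted
     * because their submission queue was deleted, i.e. the
     * "ABORTED - SQ DELETION (00/08)" completions logged above. */
    static void
    io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            int *aborted = ctx;

            if (spdk_nvme_cpl_is_error(cpl) &&
                cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    (*aborted)++;   /* command was in flight when the SQ went away */
            }
    }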
00:25:17.572 [2024-10-14 17:42:16.544944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226fb60 is same with the state(6) to be set
00:25:17.572 [2024-10-14 17:42:16.545907] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:25:17.572 [2024-10-14 17:42:16.545922] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:25:17.572 [2024-10-14 17:42:16.545931] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:25:17.572 task offset: 25088 on job bdev=Nvme10n1 fails
00:25:17.572
00:25:17.572 Latency(us)
00:25:17.572 [2024-10-14T15:42:16.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:17.572 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.572 Job: Nvme1n1 ended in about 0.94 seconds with error
00:25:17.572 Verification LBA range: start 0x0 length 0x400
00:25:17.572 Nvme1n1 : 0.94 204.89 12.81 68.30 0.00 231900.16 17101.78 220700.28
00:25:17.572 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.572 Job: Nvme2n1 ended in about 0.94 seconds with error
00:25:17.572 Verification LBA range: start 0x0 length 0x400
00:25:17.572 Nvme2n1 : 0.94 208.15 13.01 67.97 0.00 225605.40 21221.18 196732.83
00:25:17.572 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.572 Job: Nvme3n1 ended in about 0.94 seconds with error
00:25:17.572 Verification LBA range: start 0x0 length 0x400
00:25:17.572 Nvme3n1 : 0.94 211.06 13.19 68.22 0.00 219261.04 14043.43 224694.86
00:25:17.572 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.572 Job: Nvme4n1 ended in about 0.93 seconds with error
00:25:17.572 Verification LBA range: start 0x0 length 0x400
00:25:17.572 Nvme4n1 : 0.93 277.84 17.36 64.36 0.00 175719.05 19099.06 198730.12
00:25:17.572 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.572 Job: Nvme5n1 ended in about 0.95 seconds with error
00:25:17.572 Verification LBA range: start 0x0 length 0x400
00:25:17.572 Nvme5n1 : 0.95 201.98 12.62 67.33 0.00 219858.90 29709.65 207717.91
00:25:17.572 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.572 Job: Nvme6n1 ended in about 0.95 seconds with error
00:25:17.572 Verification LBA range: start 0x0 length 0x400
00:25:17.572 Nvme6n1 : 0.95 201.54 12.60 67.18 0.00 216532.11 15354.15 217704.35
00:25:17.572 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.572 Job: Nvme7n1 ended in about 0.95 seconds with error
00:25:17.572 Verification LBA range: start 0x0 length 0x400
00:25:17.572 Nvme7n1 : 0.95 201.13 12.57 67.04 0.00 213198.26 15291.73 214708.42
00:25:17.572 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.572 Job: Nvme8n1 ended in about 0.96 seconds with error
00:25:17.572 Verification LBA range: start 0x0 length 0x400
00:25:17.572 Nvme8n1 : 0.96 204.90 12.81 66.91 0.00 206645.33 8176.40 208716.56
00:25:17.572 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.572 Job: Nvme9n1 ended in about 0.93 seconds with error
00:25:17.572 Verification LBA range: start 0x0 length 0x400
00:25:17.572 Nvme9n1 : 0.93 206.39 12.90 68.80 0.00 199427.41 19723.22 218702.99
00:25:17.572 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.572 Job: Nvme10n1 ended in about 0.91 seconds with error
00:25:17.572 Verification LBA range: start 0x0 length 0x400
00:25:17.572 Nvme10n1 : 0.91 210.81 13.18 70.27 0.00 190792.90 16477.62 232684.01
00:25:17.572 [2024-10-14T15:42:16.710Z] ===================================================================================================================
00:25:17.572 [2024-10-14T15:42:16.710Z] Total : 2128.70 133.04 676.38 0.00 209117.81 8176.40 232684.01
00:25:17.572 [2024-10-14 17:42:16.576185] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:17.572 [2024-10-14 17:42:16.576238] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:25:17.572 [2024-10-14 17:42:16.576258] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:25:17.572 [2024-10-14 17:42:16.576271] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.572 [2024-10-14 17:42:16.576533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.572 [2024-10-14 17:42:16.576549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b7f0 with addr=10.0.0.2, port=4420
00:25:17.572 [2024-10-14 17:42:16.576559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b7f0 is same with the state(6) to be set
00:25:17.572 [2024-10-14 17:42:16.576667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.572 [2024-10-14 17:42:16.576679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2290e90 with addr=10.0.0.2, port=4420
00:25:17.572 [2024-10-14 17:42:16.576686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2290e90 is same with the state(6) to be set
00:25:17.572 [2024-10-14 17:42:16.576825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.572 [2024-10-14 17:42:16.576835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d85610 with addr=10.0.0.2, port=4420
00:25:17.572 [2024-10-14 17:42:16.576843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85610 is same with the state(6) to be set
00:25:17.572 [2024-10-14 17:42:16.577757] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:25:17.572 [2024-10-14 17:42:16.577773] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:25:17.572 [2024-10-14 17:42:16.577782] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:25:17.572 [2024-10-14 17:42:16.577790] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:25:17.572 [2024-10-14 17:42:16.577997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.572 [2024-10-14 17:42:16.578011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ca950 with addr=10.0.0.2, port=4420
00:25:17.572 [2024-10-14 17:42:16.578020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ca950 is same with the state(6) to be set
00:25:17.572 [2024-10-14 17:42:16.578180] posix.c:1055:posix_sock_create:
*ERROR*: connect() failed, errno = 111 00:25:17.572 [2024-10-14 17:42:16.578191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6da30 with addr=10.0.0.2, port=4420 00:25:17.572 [2024-10-14 17:42:16.578198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6da30 is same with the state(6) to be set 00:25:17.572 [2024-10-14 17:42:16.578342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.572 [2024-10-14 17:42:16.578353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e70270 with addr=10.0.0.2, port=4420 00:25:17.572 [2024-10-14 17:42:16.578360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e70270 is same with the state(6) to be set 00:25:17.572 [2024-10-14 17:42:16.578371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b7f0 (9): Bad file descriptor 00:25:17.572 [2024-10-14 17:42:16.578383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2290e90 (9): Bad file descriptor 00:25:17.572 [2024-10-14 17:42:16.578391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d85610 (9): Bad file descriptor 00:25:17.572 [2024-10-14 17:42:16.578429] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:17.572 [2024-10-14 17:42:16.578440] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:17.572 [2024-10-14 17:42:16.578449] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:17.572 [2024-10-14 17:42:16.578771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.572 [2024-10-14 17:42:16.578789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6fe10 with addr=10.0.0.2, port=4420 00:25:17.572 [2024-10-14 17:42:16.578797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6fe10 is same with the state(6) to be set 00:25:17.572 [2024-10-14 17:42:16.578879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.572 [2024-10-14 17:42:16.578889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d4c30 with addr=10.0.0.2, port=4420 00:25:17.572 [2024-10-14 17:42:16.578896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4c30 is same with the state(6) to be set 00:25:17.573 [2024-10-14 17:42:16.579028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.573 [2024-10-14 17:42:16.579039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6caa0 with addr=10.0.0.2, port=4420 00:25:17.573 [2024-10-14 17:42:16.579046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6caa0 is same with the state(6) to be set 00:25:17.573 [2024-10-14 17:42:16.579103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.573 [2024-10-14 17:42:16.579112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2890 with addr=10.0.0.2, port=4420 00:25:17.573 [2024-10-14 17:42:16.579122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2890 is same with the state(6) to 
be set 00:25:17.573 [2024-10-14 17:42:16.579131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ca950 (9): Bad file descriptor 00:25:17.573 [2024-10-14 17:42:16.579140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6da30 (9): Bad file descriptor 00:25:17.573 [2024-10-14 17:42:16.579149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e70270 (9): Bad file descriptor 00:25:17.573 [2024-10-14 17:42:16.579157] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:17.573 [2024-10-14 17:42:16.579163] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:17.573 [2024-10-14 17:42:16.579171] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:17.573 [2024-10-14 17:42:16.579183] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:17.573 [2024-10-14 17:42:16.579189] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:17.573 [2024-10-14 17:42:16.579196] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:17.573 [2024-10-14 17:42:16.579205] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:17.573 [2024-10-14 17:42:16.579211] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:17.573 [2024-10-14 17:42:16.579218] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:17.573 [2024-10-14 17:42:16.579277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.573 [2024-10-14 17:42:16.579285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.573 [2024-10-14 17:42:16.579291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.573 [2024-10-14 17:42:16.579298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6fe10 (9): Bad file descriptor 00:25:17.573 [2024-10-14 17:42:16.579306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d4c30 (9): Bad file descriptor 00:25:17.573 [2024-10-14 17:42:16.579314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6caa0 (9): Bad file descriptor 00:25:17.573 [2024-10-14 17:42:16.579322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2890 (9): Bad file descriptor 00:25:17.573 [2024-10-14 17:42:16.579329] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:17.573 [2024-10-14 17:42:16.579334] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:17.573 [2024-10-14 17:42:16.579341] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
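errno = 111 on Linux is ECONNREFUSED: the target side is being torn down by the shutdown test, so nothing is listening on 10.0.0.2:4420 anymore and every reconnect attempt above is refused, driving the controllers into the failed state. A stand-alone sketch (plain POSIX sockets, not SPDK code) that reproduces the same failure once the listener is gone:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };

            inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
                    /* with no listener on the port this prints errno 111 (ECONNREFUSED),
                     * the same value posix_sock_create reports in the log above */
                    printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
            }
            close(fd);
            return 0;
    }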
00:25:17.573 [2024-10-14 17:42:16.579349] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:17.573 [2024-10-14 17:42:16.579354] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:17.573 [2024-10-14 17:42:16.579360] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:17.573 [2024-10-14 17:42:16.579369] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.573 [2024-10-14 17:42:16.579375] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.573 [2024-10-14 17:42:16.579381] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.573 [2024-10-14 17:42:16.579407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.573 [2024-10-14 17:42:16.579414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.573 [2024-10-14 17:42:16.579420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.573 [2024-10-14 17:42:16.579425] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:17.573 [2024-10-14 17:42:16.579430] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:17.573 [2024-10-14 17:42:16.579437] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:17.573 [2024-10-14 17:42:16.579445] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:17.573 [2024-10-14 17:42:16.579451] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:17.573 [2024-10-14 17:42:16.579457] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:17.573 [2024-10-14 17:42:16.579465] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:17.573 [2024-10-14 17:42:16.579471] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:17.573 [2024-10-14 17:42:16.579477] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:17.573 [2024-10-14 17:42:16.579485] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:17.573 [2024-10-14 17:42:16.579491] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:17.573 [2024-10-14 17:42:16.579497] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:17.573 [2024-10-14 17:42:16.579520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.573 [2024-10-14 17:42:16.579527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.573 [2024-10-14 17:42:16.579533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.573 [2024-10-14 17:42:16.579538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.833 17:42:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:25:18.770 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1171731 00:25:18.770 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:25:18.770 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1171731 00:25:18.770 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:25:18.770 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:18.770 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:25:18.770 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:18.771 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1171731 00:25:18.771 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:25:18.771 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:18.771 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:25:18.771 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:25:18.771 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:25:18.771 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:18.771 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:25:18.771 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:18.771 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:18.771 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:18.771 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:18.771 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:18.771 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:25:18.771 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:18.771 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:25:18.771 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:18.771 17:42:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:18.771 rmmod nvme_tcp 00:25:19.036 rmmod nvme_fabrics 00:25:19.036 rmmod nvme_keyring 00:25:19.036 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:19.036 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:25:19.036 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:25:19.037 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 1171501 ']' 00:25:19.037 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 1171501 00:25:19.037 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1171501 ']' 00:25:19.037 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1171501 00:25:19.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1171501) - No such process 00:25:19.037 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1171501 is not found' 00:25:19.037 Process with pid 1171501 is not found 00:25:19.037 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:19.037 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:19.037 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:19.037 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:25:19.037 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:25:19.037 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:19.037 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:25:19.037 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:19.037 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:19.037 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.037 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.037 17:42:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.949 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:20.949 00:25:20.949 real 0m7.327s 00:25:20.949 user 0m17.359s 00:25:20.949 sys 0m1.370s 00:25:20.949 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:20.949 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:20.949 
************************************ 00:25:20.949 END TEST nvmf_shutdown_tc3 00:25:20.949 ************************************ 00:25:20.949 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:25:20.949 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:25:20.949 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:25:20.949 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:20.949 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:20.949 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:21.210 ************************************ 00:25:21.210 START TEST nvmf_shutdown_tc4 00:25:21.210 ************************************ 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:21.210 17:42:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:21.210 17:42:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:21.210 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:21.210 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.210 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:21.211 17:42:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:21.211 Found net devices under 0000:86:00.0: cvl_0_0 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:21.211 Found net devices under 0000:86:00.1: cvl_0_1 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:21.211 17:42:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.211 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:21.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:25:21.470 00:25:21.470 --- 10.0.0.2 ping statistics --- 00:25:21.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.470 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:21.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:25:21.470 00:25:21.470 --- 10.0.0.1 ping statistics --- 00:25:21.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.470 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=1172834 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 1172834 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 1172834 ']' 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:21.470 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:21.470 [2024-10-14 17:42:20.460533] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:25:21.470 [2024-10-14 17:42:20.460578] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.470 [2024-10-14 17:42:20.532106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:21.470 [2024-10-14 17:42:20.574123] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.470 [2024-10-14 17:42:20.574159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.470 [2024-10-14 17:42:20.574166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.470 [2024-10-14 17:42:20.574172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.470 [2024-10-14 17:42:20.574177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:21.470 [2024-10-14 17:42:20.575757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.470 [2024-10-14 17:42:20.575863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.470 [2024-10-14 17:42:20.575972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.470 [2024-10-14 17:42:20.575973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:21.730 [2024-10-14 17:42:20.713208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:21.730 17:42:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.730 17:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:21.730 Malloc1 
00:25:21.730 [2024-10-14 17:42:20.820639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.730 Malloc2 00:25:21.989 Malloc3 00:25:21.989 Malloc4 00:25:21.989 Malloc5 00:25:21.989 Malloc6 00:25:21.989 Malloc7 00:25:21.989 Malloc8 00:25:22.248 Malloc9 00:25:22.248 Malloc10 00:25:22.248 17:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.248 17:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:22.248 17:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:22.248 17:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:22.248 17:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1173099 00:25:22.248 17:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:25:22.248 17:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:25:22.248 [2024-10-14 17:42:21.329795] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:27.526 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:27.526 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1172834 00:25:27.526 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1172834 ']' 00:25:27.526 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1172834 00:25:27.526 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:25:27.526 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:27.526 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1172834 00:25:27.526 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:27.526 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:27.526 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1172834' 00:25:27.526 killing process with pid 1172834 00:25:27.526 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 1172834 00:25:27.526 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 1172834 00:25:27.526 [2024-10-14 17:42:26.330017] 
00:25:27.526 [2024-10-14 17:42:26.330017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f11f0 is same with the state(6) to be set
[... the recv-state error above is logged repeatedly between 17:42:26.330 and 17:42:26.338 for tqpair=0x22f11f0, 0x22f16c0, 0x22f1b90, 0x22f0d20, 0x22ec3f0 and 0x2461ba0, interleaved with the first failed completions ...]
[... the per-I/O lines "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" repeat for every queued write on each failing qpair and are collapsed around the per-qpair errors below ...]
[2024-10-14 17:42:26.337930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[2024-10-14 17:42:26.339699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[2024-10-14 17:42:26.340736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
NVMe io qpair process completion error
[2024-10-14 17:42:26.341765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[2024-10-14 17:42:26.342633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[2024-10-14 17:42:26.343593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[2024-10-14 17:42:26.345345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
NVMe io qpair process completion error
[2024-10-14 17:42:26.346364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[2024-10-14 17:42:26.347143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[2024-10-14 17:42:26.348138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[2024-10-14 17:42:26.349923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
NVMe io qpair process completion error
[2024-10-14 17:42:26.350981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[2024-10-14 17:42:26.351851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[2024-10-14 17:42:26.352833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[2024-10-14 17:42:26.354755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
NVMe io qpair process completion error
[2024-10-14 17:42:26.355686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... per-I/O write failures continue ...]
00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 [2024-10-14 17:42:26.356554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:27.532 starting I/O failed: -6 00:25:27.532 starting I/O failed: -6 00:25:27.532 starting I/O failed: -6 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 
00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 [2024-10-14 17:42:26.357713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.532 starting I/O failed: -6 00:25:27.532 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 
00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 
00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 [2024-10-14 17:42:26.360334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:27.533 NVMe io qpair process completion error 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 [2024-10-14 17:42:26.361292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.533 starting I/O failed: -6 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed 
with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 [2024-10-14 17:42:26.362189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 Write completed with error (sct=0, sc=8) 00:25:27.533 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 
00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 [2024-10-14 17:42:26.363207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write 
completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.534 starting I/O failed: -6 00:25:27.534 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write 
completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 [2024-10-14 17:42:26.368324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:27.535 NVMe io qpair process completion error 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 
Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 [2024-10-14 17:42:26.369377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 
00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 [2024-10-14 17:42:26.370236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.535 starting I/O failed: -6 00:25:27.535 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 
00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 [2024-10-14 17:42:26.371246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 
starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 [2024-10-14 17:42:26.373167] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:27.536 NVMe io qpair process completion error 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.536 starting I/O failed: -6 00:25:27.536 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 [2024-10-14 17:42:26.374100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 
starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 [2024-10-14 17:42:26.374966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 00:25:27.537 starting I/O failed: -6 00:25:27.537 Write completed with error (sct=0, sc=8) 
00:25:27.537 Write completed with error (sct=0, sc=8)
00:25:27.537 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:25:27.538 [2024-10-14 17:42:26.375973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-completion errors omitted ...]
00:25:27.538 [2024-10-14 17:42:26.377829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:27.538 NVMe io qpair process completion error
[... repeated write-completion errors omitted ...]
00:25:27.539 [2024-10-14 17:42:26.378847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-completion errors omitted ...]
00:25:27.539 [2024-10-14 17:42:26.379751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-completion errors omitted ...]
00:25:27.539 [2024-10-14 17:42:26.380735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-completion errors omitted ...]
00:25:27.540 [2024-10-14 17:42:26.385334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:27.540 NVMe io qpair process completion error
[... repeated write-completion errors omitted ...]
00:25:27.540 [2024-10-14 17:42:26.386333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-completion errors omitted ...]
00:25:27.540 [2024-10-14 17:42:26.387192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-completion errors omitted ...]
00:25:27.541 [2024-10-14 17:42:26.388223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-completion errors omitted ...]
00:25:27.541 [2024-10-14 17:42:26.390839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:27.541 NVMe io qpair process completion error
[... a final run of "Write completed with error (sct=0, sc=8)" entries, without interleaved failure markers, omitted ...]
00:25:27.542 Initializing NVMe Controllers
00:25:27.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:25:27.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:25:27.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:25:27.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:25:27.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:25:27.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:27.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:25:27.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:25:27.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:25:27.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:25:27.542 Controller IO queue size 128, less than required.
00:25:27.542 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
[... the same two-line queue-size advisory was printed after each of the ten controllers above ...]
00:25:27.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:25:27.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:25:27.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:25:27.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:25:27.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:25:27.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:27.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:25:27.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:25:27.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:25:27.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:25:27.542 Initialization complete. Launching workers.
00:25:27.542 ========================================================
00:25:27.542                                                                    Latency(us)
00:25:27.542 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:25:27.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    2171.10      93.29   59273.19     509.82  112791.79
00:25:27.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   2238.30      96.18   57196.47     881.92  112283.22
00:25:27.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    2246.43      96.53   57001.76     787.13  110269.36
00:25:27.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    2232.16      95.91   57378.93     762.24  107478.01
00:25:27.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    2237.43      96.14   57259.18     695.94  105295.01
00:25:27.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2242.26      96.35   57166.26     666.06  104962.92
00:25:27.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    2210.41      94.98   58038.27     900.26  104022.75
00:25:27.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    2223.15      95.53   57719.75     866.99  112163.93
00:25:27.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    2153.53      92.53   59600.76     903.29  114335.76
00:25:27.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    2206.68      94.82   58213.77     851.89  102983.14
00:25:27.542 ========================================================
00:25:27.542 Total                                                                    :   22161.45     952.25   57873.22     509.82  114335.76
00:25:27.542 
00:25:27.542 [2024-10-14 17:42:26.395864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201abb0 is same with the state(6) to be set
00:25:27.542 [2024-10-14 17:42:26.395913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201a9d0 is same with the state(6) to be set
00:25:27.542 [2024-10-14 17:42:26.395943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2018960 is same with the state(6) to be set
00:25:27.542 [2024-10-14 17:42:26.395971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2018630 is same with the state(6) to be set
00:25:27.542 [2024-10-14 17:42:26.395999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2018c90 is same with the state(6) to be set
00:25:27.542 [2024-10-14 17:42:26.396027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201a7f0 is same with the state(6) to be set
00:25:27.542 [2024-10-14 17:42:26.396054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2018fc0 is same with the state(6) to be set
00:25:27.542 [2024-10-14 17:42:26.396082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f190 is same with the state(6) to be set
00:25:27.542 [2024-10-14 17:42:26.396114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f4c0 is same with the state(6) to be set
00:25:27.542 [2024-10-14 17:42:26.396142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ee60 is same with the state(6) to be set
00:25:27.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
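The dump above comes from the spdk_nvme_perf run that shutdown_tc4 tears down mid-I/O: the write failures and CQ transport errors are the fallout of the target going away under load. The exact command line is not shown in this excerpt; a representative invocation against the target above would look like the following sketch, where the queue depth, I/O size, workload and runtime are illustrative values, not taken from the log:

    # Illustrative only: -r names the transport ID, -q the queue depth,
    # -o the I/O size in bytes, -w the workload, -t the runtime in seconds.
    ./build/bin/spdk_nvme_perf \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -q 128 -o 4096 -w write -t 10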
00:25:27.801 17:42:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1173099
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1173099
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1173099
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
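The NOT helper traced above inverts the exit status of the command it wraps, so this step passes precisely because wait 1173099 failed. A minimal sketch of the behavior visible in the trace; the real helper in autotest_common.sh also validates the argument type and special-cases exit codes above 128:

    NOT() {
        local es=0
        "$@" || es=$?
        # In the trace: es=1 after wait failed, (( es > 128 )) was false,
        # and (( !es == 0 )) turned the failure into success.
        ((es != 0))
    }
    NOT wait 1173099   # succeeds here because the perf process already exited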
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:28.737 rmmod nvme_tcp
00:25:28.737 rmmod nvme_fabrics
00:25:28.737 rmmod nvme_keyring
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 1172834 ']'
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 1172834
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1172834 ']'
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1172834
00:25:28.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1172834) - No such process
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1172834 is not found'
00:25:28.737 Process with pid 1172834 is not found
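killprocess, as traced above, probes the pid with kill -0 before trying to terminate it; pid 1172834 was already gone, so only the not-found message was printed. A reduced sketch of the path taken here, assuming the real helper layers further checks and sudo handling on top of this:

    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1              # the '[' -z ... ']' guard above
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"   # branch taken above
            return 0
        fi
        kill "$pid"
    }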
00:25:28.737 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:25:28.738 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:25:28.738 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:25:28.738 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:25:28.738 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save
00:25:28.738 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:25:28.738 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore
00:25:28.738 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:28.738 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:28.738 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:28.738 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:28.738 17:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:31.277 17:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:31.277 
00:25:31.277 real	0m9.768s
00:25:31.277 user	0m24.880s
00:25:31.277 sys	0m5.217s
00:25:31.277 17:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:31.277 17:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:25:31.277 ************************************
00:25:31.277 END TEST nvmf_shutdown_tc4
00:25:31.277 ************************************
00:25:31.277 17:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:25:31.277 
00:25:31.277 real	0m40.057s
00:25:31.277 user	1m37.200s
00:25:31.277 sys	0m13.965s
00:25:31.277 17:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:31.277 17:42:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:25:31.277 ************************************
00:25:31.277 END TEST nvmf_shutdown
00:25:31.277 ************************************
00:25:31.277 17:42:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:25:31.277 
00:25:31.277 real	11m42.660s
00:25:31.277 user	25m28.374s
00:25:31.277 sys	3m34.304s
00:25:31.277 17:42:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:31.277 17:42:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:25:31.277 ************************************
00:25:31.277 END TEST nvmf_target_extra
00:25:31.277 ************************************
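Each START/END banner and real/user/sys block in this log is emitted by run_test from autotest_common.sh, which wraps a test script in banners and a timed invocation. A stripped-down sketch of just the behavior visible here; the real function also records per-test results for the final summary:

    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # produces the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }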
00:25:31.277 17:42:29 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:25:31.277 17:42:29 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:25:31.277 17:42:29 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:31.277 17:42:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:31.277 ************************************
00:25:31.277 START TEST nvmf_host
00:25:31.277 ************************************
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:25:31.277 * Looking for test storage...
00:25:31.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-:
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-:
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<'
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:25:31.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:31.277 --rc genhtml_branch_coverage=1
00:25:31.277 --rc genhtml_function_coverage=1
00:25:31.277 --rc genhtml_legend=1
00:25:31.277 --rc geninfo_all_blocks=1
00:25:31.277 --rc geninfo_unexecuted_blocks=1
00:25:31.277 
00:25:31.277 '
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:25:31.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:31.277 --rc genhtml_branch_coverage=1
00:25:31.277 --rc genhtml_function_coverage=1
00:25:31.277 --rc genhtml_legend=1
00:25:31.277 --rc geninfo_all_blocks=1
00:25:31.277 --rc geninfo_unexecuted_blocks=1
00:25:31.277 
00:25:31.277 '
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:25:31.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:31.277 --rc genhtml_branch_coverage=1
00:25:31.277 --rc genhtml_function_coverage=1
00:25:31.277 --rc genhtml_legend=1
00:25:31.277 --rc geninfo_all_blocks=1
00:25:31.277 --rc geninfo_unexecuted_blocks=1
00:25:31.277 
00:25:31.277 '
00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:25:31.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:31.277 --rc genhtml_branch_coverage=1
00:25:31.277 --rc genhtml_function_coverage=1
00:25:31.277 --rc genhtml_legend=1
00:25:31.277 --rc geninfo_all_blocks=1
00:25:31.277 --rc geninfo_unexecuted_blocks=1
00:25:31.277 
00:25:31.277 '
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:31.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.277 ************************************ 00:25:31.277 START TEST nvmf_multicontroller 00:25:31.277 ************************************ 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:31.277 * Looking for test storage... 00:25:31.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:25:31.277 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:31.278 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:25:31.278 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:25:31.278 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:31.278 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:31.278 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:25:31.278 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:25:31.278 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:31.278 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:25:31.278 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:25:31.278 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:25:31.278 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:25:31.278 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:31.278 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:31.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.543 --rc genhtml_branch_coverage=1 00:25:31.543 --rc genhtml_function_coverage=1 00:25:31.543 --rc genhtml_legend=1 00:25:31.543 --rc geninfo_all_blocks=1 00:25:31.543 --rc geninfo_unexecuted_blocks=1 00:25:31.543 00:25:31.543 ' 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:31.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.543 --rc genhtml_branch_coverage=1 00:25:31.543 --rc genhtml_function_coverage=1 00:25:31.543 --rc genhtml_legend=1 00:25:31.543 --rc geninfo_all_blocks=1 00:25:31.543 --rc geninfo_unexecuted_blocks=1 00:25:31.543 00:25:31.543 ' 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:31.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.543 --rc genhtml_branch_coverage=1 00:25:31.543 --rc genhtml_function_coverage=1 00:25:31.543 --rc genhtml_legend=1 00:25:31.543 --rc geninfo_all_blocks=1 00:25:31.543 --rc geninfo_unexecuted_blocks=1 00:25:31.543 00:25:31.543 ' 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:31.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.543 --rc genhtml_branch_coverage=1 00:25:31.543 --rc genhtml_function_coverage=1 00:25:31.543 --rc genhtml_legend=1 00:25:31.543 --rc geninfo_all_blocks=1 00:25:31.543 --rc geninfo_unexecuted_blocks=1 00:25:31.543 00:25:31.543 ' 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:25:31.543 17:42:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.543 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:31.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:31.544 17:42:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:25:31.544 17:42:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:25:38.114 
17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:38.114 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:38.114 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.114 17:42:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:38.114 Found net devices under 0000:86:00.0: cvl_0_0 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:38.114 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:38.115 Found net devices under 0000:86:00.1: cvl_0_1 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
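Two details of nvmf/common.sh are worth unpacking from this stretch of the trace. First, the repeated "common.sh: line 33: [: : integer expression expected" complaint is bash objecting to a numeric test on an empty variable ('[' '' -eq 1 ']' in the trace); the branch still falls through as intended, so it is noise rather than a failure, and defaulting the expansion would silence it. Second, NIC discovery simply globs the net/ directory the kernel exposes under each PCI function in sysfs, which is how the two e810 functions resolve to cvl_0_0 and cvl_0_1. A condensed, hedged re-reading of both (variable and flag names are illustrative, not the shipped script):

    #!/usr/bin/env bash
    # Condensed sketch of the fragment traced above; not the shipped
    # test/nvmf/common.sh.

    # The "line 33" warning: [ "" -eq 1 ] is a bash error. Defaulting the
    # expansion keeps the same logic and silences it. SPDK_TEST_FLAG is a
    # hypothetical name; the real variable is simply empty in this run.
    if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"     # placeholder for whatever line 33 guards
    fi

    # NIC discovery: every net device bound to a PCI function appears under
    # /sys/bus/pci/devices/<bdf>/net/. Glob it per candidate function:
    pci_devs=(0000:86:00.0 0000:86:00.1)   # the e810 functions from this log
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$path" ] || continue      # the glob may match nothing
            net_devs+=("${path##*/}")
            echo "Found net devices under $pci: ${path##*/}"
        done
    done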
00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:38.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:25:38.115 00:25:38.115 --- 10.0.0.2 ping statistics --- 00:25:38.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.115 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:38.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:25:38.115 00:25:38.115 --- 10.0.0.1 ping statistics --- 00:25:38.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.115 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=1177623 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 1177623 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1177623 ']' 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.115 [2024-10-14 17:42:36.463182] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
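With both pings answering, the topology nvmf_tcp_init just built is in place: the target-side e810 port (cvl_0_0) has been moved into its own network namespace so target and initiator can exercise a real NIC-to-NIC TCP path on one host, and the nvmf_tgt started next runs inside that namespace. The sequence, lifted from the trace into a standalone sketch (root required; interface names are specific to this machine, and the iptables comment tag from the trace is omitted):

    #!/usr/bin/env bash
    # Sketch of the nvmf_tcp_init plumbing traced above. Run as root.
    set -e
    TARGET_IF=cvl_0_0       # moves into the namespace, target side
    INITIATOR_IF=cvl_0_1    # stays in the default namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # Admit NVMe/TCP traffic to the listener port before the target starts:
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # default ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1  # namespace -> default ns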
00:25:38.115 [2024-10-14 17:42:36.463224] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.115 [2024-10-14 17:42:36.521230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:38.115 [2024-10-14 17:42:36.564654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.115 [2024-10-14 17:42:36.564687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.115 [2024-10-14 17:42:36.564695] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:38.115 [2024-10-14 17:42:36.564701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:38.115 [2024-10-14 17:42:36.564706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:38.115 [2024-10-14 17:42:36.566075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:38.115 [2024-10-14 17:42:36.566178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.115 [2024-10-14 17:42:36.566179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.115 [2024-10-14 17:42:36.713985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.115 Malloc0 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.115 [2024-10-14 17:42:36.772572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:38.115 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.116 [2024-10-14 17:42:36.780515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.116 Malloc1 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1177789 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1177789 /var/tmp/bdevperf.sock 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1177789 ']' 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:38.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
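At this point the target side is fully provisioned: a TCP transport, two 64 MiB malloc bdevs, two subsystems (cnode1 and cnode2) each exposing a namespace on listeners 4420 and 4421 at 10.0.0.2, and a bdevperf instance idling in -z (wait-for-RPC) mode on /var/tmp/bdevperf.sock. rpc_cmd in the trace is a thin wrapper over scripts/rpc.py, so the same setup can be expressed directly, as sketched below; SPDK is a placeholder for the repository root, and the transport flags are copied verbatim from the trace rather than interpreted.

    #!/usr/bin/env bash
    # Hedged sketch of the provisioning sequence traced above, as direct
    # rpc.py calls against the target's default RPC socket.
    SPDK=/path/to/spdk          # placeholder for the checkout root
    rpc_py() { "$SPDK/scripts/rpc.py" "$@"; }

    # The target runs inside the namespace built earlier:
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &

    rpc_py nvmf_create_transport -t tcp -o -u 8192     # flags as in the trace
    rpc_py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB, 512 B blocks
    rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # ...cnode2/Malloc1 are provisioned the same way in the trace...

    # bdevperf starts with no bdevs (-z) and waits on its own RPC socket:
    "$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w write -t 1 -f &

The -z flag is what makes bdevperf wait: the test then attaches NVMe bdevs to it over -r /var/tmp/bdevperf.sock, which is exactly what the bdev_nvme_attach_controller calls that follow in the log are doing.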
00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:38.116 17:42:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.116 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:38.116 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:25:38.116 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:38.116 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.116 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.375 NVMe0n1 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.375 1 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.375 request: 00:25:38.375 { 00:25:38.375 "name": "NVMe0", 00:25:38.375 "trtype": "tcp", 00:25:38.375 "traddr": "10.0.0.2", 00:25:38.375 "adrfam": "ipv4", 00:25:38.375 "trsvcid": "4420", 00:25:38.375 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:25:38.375 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:38.375 "hostaddr": "10.0.0.1", 00:25:38.375 "prchk_reftag": false, 00:25:38.375 "prchk_guard": false, 00:25:38.375 "hdgst": false, 00:25:38.375 "ddgst": false, 00:25:38.375 "allow_unrecognized_csi": false, 00:25:38.375 "method": "bdev_nvme_attach_controller", 00:25:38.375 "req_id": 1 00:25:38.375 } 00:25:38.375 Got JSON-RPC error response 00:25:38.375 response: 00:25:38.375 { 00:25:38.375 "code": -114, 00:25:38.375 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:38.375 } 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.375 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.375 request: 00:25:38.375 { 00:25:38.375 "name": "NVMe0", 00:25:38.375 "trtype": "tcp", 00:25:38.375 "traddr": "10.0.0.2", 00:25:38.375 "adrfam": "ipv4", 00:25:38.375 "trsvcid": "4420", 00:25:38.376 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:38.376 "hostaddr": "10.0.0.1", 00:25:38.376 "prchk_reftag": false, 00:25:38.376 "prchk_guard": false, 00:25:38.376 "hdgst": false, 00:25:38.376 "ddgst": false, 00:25:38.376 "allow_unrecognized_csi": false, 00:25:38.376 "method": "bdev_nvme_attach_controller", 00:25:38.376 "req_id": 1 00:25:38.376 } 00:25:38.376 Got JSON-RPC error response 00:25:38.376 response: 00:25:38.376 { 00:25:38.376 "code": -114, 00:25:38.376 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:38.376 } 00:25:38.376 17:42:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.376 request: 00:25:38.376 { 00:25:38.376 "name": "NVMe0", 00:25:38.376 "trtype": "tcp", 00:25:38.376 "traddr": "10.0.0.2", 00:25:38.376 "adrfam": "ipv4", 00:25:38.376 "trsvcid": "4420", 00:25:38.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:38.376 "hostaddr": "10.0.0.1", 00:25:38.376 "prchk_reftag": false, 00:25:38.376 "prchk_guard": false, 00:25:38.376 "hdgst": false, 00:25:38.376 "ddgst": false, 00:25:38.376 "multipath": "disable", 00:25:38.376 "allow_unrecognized_csi": false, 00:25:38.376 "method": "bdev_nvme_attach_controller", 00:25:38.376 "req_id": 1 00:25:38.376 } 00:25:38.376 Got JSON-RPC error response 00:25:38.376 response: 00:25:38.376 { 00:25:38.376 "code": -114, 00:25:38.376 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:25:38.376 } 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:38.376 17:42:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.376 request: 00:25:38.376 { 00:25:38.376 "name": "NVMe0", 00:25:38.376 "trtype": "tcp", 00:25:38.376 "traddr": "10.0.0.2", 00:25:38.376 "adrfam": "ipv4", 00:25:38.376 "trsvcid": "4420", 00:25:38.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:38.376 "hostaddr": "10.0.0.1", 00:25:38.376 "prchk_reftag": false, 00:25:38.376 "prchk_guard": false, 00:25:38.376 "hdgst": false, 00:25:38.376 "ddgst": false, 00:25:38.376 "multipath": "failover", 00:25:38.376 "allow_unrecognized_csi": false, 00:25:38.376 "method": "bdev_nvme_attach_controller", 00:25:38.376 "req_id": 1 00:25:38.376 } 00:25:38.376 Got JSON-RPC error response 00:25:38.376 response: 00:25:38.376 { 00:25:38.376 "code": -114, 00:25:38.376 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:38.376 } 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.376 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:38.635 NVMe0n1 00:25:38.635 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
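The three -114 failures above all come from the same rule: a controller name (-b NVMe0) stays bound to the path it was created with, and re-attaching under that name is rejected unless the call adds a genuinely new path to the same subsystem. A minimal sketch of that sequence, assuming a bdevperf instance is already serving /var/tmp/bdevperf.sock and using spdk/scripts/rpc.py in place of the test's rpc_cmd wrapper:

# First path: creates controller NVMe0 (done earlier in this test run).
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

# Same name, different subsystem NQN: rejected with JSON-RPC error -114,
# as is re-attaching the same path with -x disable or -x failover above.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 \
    || echo 'expected failure: name already bound to another path'

# A second listener of the same subsystem is accepted as an extra path
# (the call at host/multicontroller.sh@79 above, which prints NVMe0n1).
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1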
00:25:38.635 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:38.635 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:38.635 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:38.635 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:38.635 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:25:38.635 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:38.635 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:38.635
00:25:38.635 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:38.635 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:38.635 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe
00:25:38.635 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:38.635 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:38.635 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:38.635 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:25:38.635 17:42:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:40.014 {
00:25:40.014 "results": [
00:25:40.014 {
00:25:40.014 "job": "NVMe0n1",
00:25:40.014 "core_mask": "0x1",
00:25:40.014 "workload": "write",
00:25:40.014 "status": "finished",
00:25:40.014 "queue_depth": 128,
00:25:40.014 "io_size": 4096,
00:25:40.014 "runtime": 1.007849,
00:25:40.014 "iops": 25078.16151030561,
00:25:40.014 "mibps": 97.96156839963129,
00:25:40.014 "io_failed": 0,
00:25:40.014 "io_timeout": 0,
00:25:40.014 "avg_latency_us": 5097.355569497433,
00:25:40.014 "min_latency_us": 4805.973333333333,
00:25:40.014 "max_latency_us": 10673.005714285715
00:25:40.014 }
00:25:40.014 ],
00:25:40.014 "core_count": 1
00:25:40.014 }
00:25:40.014 17:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:25:40.014 17:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:40.014 17:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:25:40.014 17:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:40.014 17:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]]
00:25:40.014 17:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1177789
00:25:40.014 17:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller --
common/autotest_common.sh@950 -- # '[' -z 1177789 ']' 00:25:40.014 17:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1177789 00:25:40.014 17:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:25:40.014 17:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:40.014 17:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1177789 00:25:40.014 17:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:40.014 17:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:40.014 17:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1177789' 00:25:40.014 killing process with pid 1177789 00:25:40.014 17:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1177789 00:25:40.014 17:42:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1177789 00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:25:40.014 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:40.014 [2024-10-14 17:42:36.885499] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:25:40.014 [2024-10-14 17:42:36.885550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1177789 ]
00:25:40.014 [2024-10-14 17:42:36.954763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:40.014 [2024-10-14 17:42:36.997185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:40.014 [2024-10-14 17:42:37.728620] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name 5fda8f03-f23d-4e14-8a8d-16abd1a32c18 already exists
00:25:40.014 [2024-10-14 17:42:37.728649] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:5fda8f03-f23d-4e14-8a8d-16abd1a32c18 alias for bdev NVMe1n1
00:25:40.014 [2024-10-14 17:42:37.728657] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:25:40.014 Running I/O for 1 seconds...
00:25:40.014 25020.00 IOPS, 97.73 MiB/s
00:25:40.014 Latency(us)
00:25:40.014 [2024-10-14T15:42:39.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:40.014 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:25:40.014 NVMe0n1 : 1.01 25078.16 97.96 0.00 0.00 5097.36 4805.97 10673.01
00:25:40.014 [2024-10-14T15:42:39.152Z] ===================================================================================================================
00:25:40.014 [2024-10-14T15:42:39.152Z] Total : 25078.16 97.96 0.00 0.00 5097.36 4805.97 10673.01
00:25:40.014 Received shutdown signal, test time was about 1.000000 seconds
00:25:40.014
00:25:40.014 Latency(us)
00:25:40.014 [2024-10-14T15:42:39.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:40.014 [2024-10-14T15:42:39.152Z] ===================================================================================================================
00:25:40.014 [2024-10-14T15:42:39.152Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:40.014 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file
00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup
00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:40.014 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:40.014 rmmod nvme_tcp
00:25:40.274 rmmod nvme_fabrics
00:25:40.274 rmmod nvme_keyring
00:25:40.274 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:40.274 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:25:40.274 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
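In the perform_tests output above, mibps is derived from iops and io_size (mibps = iops x io_size / 2^20), and the human-readable table in try.txt repeats the same numbers rounded. A quick way to check or extract them, assuming python3 and jq are available on the build host and the JSON response was saved to a file (bdevperf_results.json here is hypothetical):

# 25078.16 IOPS x 4096 B per IO / 2^20 B per MiB = 97.96 MiB/s, matching "mibps" above.
python3 -c 'print(25078.16151030561 * 4096 / 1048576)'

# Pull the headline fields back out of a saved perform_tests response:
jq '.results[] | {job, iops, mibps, avg_latency_us}' bdevperf_results.json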
17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 1177623 ']' 00:25:40.274 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 1177623 00:25:40.274 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1177623 ']' 00:25:40.274 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1177623 00:25:40.274 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:25:40.274 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:40.274 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1177623 00:25:40.274 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:40.274 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:40.274 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1177623' 00:25:40.274 killing process with pid 1177623 00:25:40.274 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1177623 00:25:40.274 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1177623 00:25:40.533 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:40.533 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:40.533 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:40.533 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:25:40.533 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:25:40.533 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:40.533 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:25:40.533 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:40.533 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:40.533 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.533 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.533 17:42:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.439 17:42:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:42.439 00:25:42.439 real 0m11.300s 00:25:42.439 user 0m12.773s 00:25:42.439 sys 0m5.188s 00:25:42.439 17:42:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:42.439 17:42:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:42.439 ************************************ 00:25:42.439 END TEST nvmf_multicontroller 00:25:42.439 ************************************ 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.699 ************************************ 00:25:42.699 START TEST nvmf_aer 00:25:42.699 ************************************ 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:42.699 * Looking for test storage... 00:25:42.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:25:42.699 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:42.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.700 --rc genhtml_branch_coverage=1 00:25:42.700 --rc genhtml_function_coverage=1 00:25:42.700 --rc genhtml_legend=1 00:25:42.700 --rc geninfo_all_blocks=1 00:25:42.700 --rc geninfo_unexecuted_blocks=1 00:25:42.700 00:25:42.700 ' 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:42.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.700 --rc genhtml_branch_coverage=1 00:25:42.700 --rc genhtml_function_coverage=1 00:25:42.700 --rc genhtml_legend=1 00:25:42.700 --rc geninfo_all_blocks=1 00:25:42.700 --rc geninfo_unexecuted_blocks=1 00:25:42.700 00:25:42.700 ' 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:42.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.700 --rc genhtml_branch_coverage=1 00:25:42.700 --rc genhtml_function_coverage=1 00:25:42.700 --rc genhtml_legend=1 00:25:42.700 --rc geninfo_all_blocks=1 00:25:42.700 --rc geninfo_unexecuted_blocks=1 00:25:42.700 00:25:42.700 ' 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:42.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.700 --rc genhtml_branch_coverage=1 00:25:42.700 --rc genhtml_function_coverage=1 00:25:42.700 --rc genhtml_legend=1 00:25:42.700 --rc geninfo_all_blocks=1 00:25:42.700 --rc geninfo_unexecuted_blocks=1 00:25:42.700 00:25:42.700 ' 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:42.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:25:42.700 17:42:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:49.271 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.271 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:25:49.271 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:49.271 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:49.272 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:49.272 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:49.272 Found net devices under 0000:86:00.0: cvl_0_0 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:49.272 17:42:47 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:49.272 Found net devices under 0000:86:00.1: cvl_0_1 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:49.272 
17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:49.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:49.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms
00:25:49.272
00:25:49.272 --- 10.0.0.2 ping statistics ---
00:25:49.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:49.272 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:49.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:49.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms
00:25:49.272
00:25:49.272 --- 10.0.0.1 ping statistics ---
00:25:49.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:49.272 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=1181638
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 1181638
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1181638 ']'
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:49.272 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:49.273 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:49.273 17:42:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:49.273 [2024-10-14 17:42:47.844149] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization...
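The two ping checks above exercise the split that nvmf/common.sh builds for phy runs: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and nvmf_tgt is launched inside it, while the initiator side (cvl_0_1) stays in the root namespace. Condensed from the commands logged above (a sketch, not the full helper, which also flushes addresses and opens the 4420 firewall port):

# Target NIC moves into its own namespace; initiator NIC stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target ns
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Reachability check in both directions, then launch the target in the ns.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &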
00:25:49.273 [2024-10-14 17:42:47.844192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.273 [2024-10-14 17:42:47.916386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:49.273 [2024-10-14 17:42:47.958871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.273 [2024-10-14 17:42:47.958907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.273 [2024-10-14 17:42:47.958914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.273 [2024-10-14 17:42:47.958920] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.273 [2024-10-14 17:42:47.958925] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:49.273 [2024-10-14 17:42:47.960471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.273 [2024-10-14 17:42:47.960583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.273 [2024-10-14 17:42:47.960706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.273 [2024-10-14 17:42:47.960707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:49.273 [2024-10-14 17:42:48.096588] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:49.273 Malloc0 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:49.273 [2024-10-14 17:42:48.156936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:49.273 [ 00:25:49.273 { 00:25:49.273 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:49.273 "subtype": "Discovery", 00:25:49.273 "listen_addresses": [], 00:25:49.273 "allow_any_host": true, 00:25:49.273 "hosts": [] 00:25:49.273 }, 00:25:49.273 { 00:25:49.273 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:49.273 "subtype": "NVMe", 00:25:49.273 "listen_addresses": [ 00:25:49.273 { 00:25:49.273 "trtype": "TCP", 00:25:49.273 "adrfam": "IPv4", 00:25:49.273 "traddr": "10.0.0.2", 00:25:49.273 "trsvcid": "4420" 00:25:49.273 } 00:25:49.273 ], 00:25:49.273 "allow_any_host": true, 00:25:49.273 "hosts": [], 00:25:49.273 "serial_number": "SPDK00000000000001", 00:25:49.273 "model_number": "SPDK bdev Controller", 00:25:49.273 "max_namespaces": 2, 00:25:49.273 "min_cntlid": 1, 00:25:49.273 "max_cntlid": 65519, 00:25:49.273 "namespaces": [ 00:25:49.273 { 00:25:49.273 "nsid": 1, 00:25:49.273 "bdev_name": "Malloc0", 00:25:49.273 "name": "Malloc0", 00:25:49.273 "nguid": "E0201851265E46F28C9C313BEBDBAF07", 00:25:49.273 "uuid": "e0201851-265e-46f2-8c9c-313bebdbaf07" 00:25:49.273 } 00:25:49.273 ] 00:25:49.273 } 00:25:49.273 ] 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1181717 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:25:49.273 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:25:49.532 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:49.533 Malloc1 00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:49.533 Asynchronous Event Request test 00:25:49.533 Attaching to 10.0.0.2 00:25:49.533 Attached to 10.0.0.2 00:25:49.533 Registering asynchronous event callbacks... 00:25:49.533 Starting namespace attribute notice tests for all controllers... 00:25:49.533 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:49.533 aer_cb - Changed Namespace 00:25:49.533 Cleaning up... 
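The aer tool output above is the expected flow: it arms asynchronous event callbacks, then host/aer.sh hot-adds a second namespace, which makes the target raise a Namespace Attribute Changed notice (log page 4) that the tool reports before cleaning up. Condensed from the rpc_cmd calls logged above (a sketch, assuming the target from this test is still up and rpc.py stands in for rpc_cmd); the nvmf_get_subsystems listing that follows shows the result, with Malloc1 attached as nsid 2:

# Arm AERs and wait for the touch file before mutating the subsystem.
./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

# Hot-adding a namespace triggers the aer_cb for log page 4 seen above.
./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
./scripts/rpc.py nvmf_get_subsystems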
00:25:49.533 [
00:25:49.533 {
00:25:49.533 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:25:49.533 "subtype": "Discovery",
00:25:49.533 "listen_addresses": [],
00:25:49.533 "allow_any_host": true,
00:25:49.533 "hosts": []
00:25:49.533 },
00:25:49.533 {
00:25:49.533 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:25:49.533 "subtype": "NVMe",
00:25:49.533 "listen_addresses": [
00:25:49.533 {
00:25:49.533 "trtype": "TCP",
00:25:49.533 "adrfam": "IPv4",
00:25:49.533 "traddr": "10.0.0.2",
00:25:49.533 "trsvcid": "4420"
00:25:49.533 }
00:25:49.533 ],
00:25:49.533 "allow_any_host": true,
00:25:49.533 "hosts": [],
00:25:49.533 "serial_number": "SPDK00000000000001",
00:25:49.533 "model_number": "SPDK bdev Controller",
00:25:49.533 "max_namespaces": 2,
00:25:49.533 "min_cntlid": 1,
00:25:49.533 "max_cntlid": 65519,
00:25:49.533 "namespaces": [
00:25:49.533 {
00:25:49.533 "nsid": 1,
00:25:49.533 "bdev_name": "Malloc0",
00:25:49.533 "name": "Malloc0",
00:25:49.533 "nguid": "E0201851265E46F28C9C313BEBDBAF07",
00:25:49.533 "uuid": "e0201851-265e-46f2-8c9c-313bebdbaf07"
00:25:49.533 },
00:25:49.533 {
00:25:49.533 "nsid": 2,
00:25:49.533 "bdev_name": "Malloc1",
00:25:49.533 "name": "Malloc1",
00:25:49.533 "nguid": "70620CF52350490097DBD75BBDD00E07",
00:25:49.533 "uuid": "70620cf5-2350-4900-97db-d75bbdd00e07"
00:25:49.533 }
00:25:49.533 ]
00:25:49.533 }
00:25:49.533 ]
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1181717
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:49.533 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:49.533 rmmod
nvme_tcp 00:25:49.533 rmmod nvme_fabrics 00:25:49.533 rmmod nvme_keyring 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 1181638 ']' 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 1181638 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1181638 ']' 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1181638 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1181638 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1181638' 00:25:49.793 killing process with pid 1181638 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1181638 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1181638 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:49.793 17:42:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.400 17:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:52.400 00:25:52.400 real 0m9.346s 00:25:52.400 user 0m5.482s 00:25:52.400 sys 0m4.879s 00:25:52.400 17:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:52.400 17:42:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:52.400 ************************************ 00:25:52.400 END TEST nvmf_aer 00:25:52.400 ************************************ 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.400 ************************************ 00:25:52.400 START TEST nvmf_async_init 00:25:52.400 ************************************ 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:52.400 * Looking for test storage... 00:25:52.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:52.400 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:52.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.401 --rc genhtml_branch_coverage=1 00:25:52.401 --rc genhtml_function_coverage=1 00:25:52.401 --rc genhtml_legend=1 00:25:52.401 --rc geninfo_all_blocks=1 00:25:52.401 --rc geninfo_unexecuted_blocks=1 00:25:52.401 00:25:52.401 ' 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:52.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.401 --rc genhtml_branch_coverage=1 00:25:52.401 --rc genhtml_function_coverage=1 00:25:52.401 --rc genhtml_legend=1 00:25:52.401 --rc geninfo_all_blocks=1 00:25:52.401 --rc geninfo_unexecuted_blocks=1 00:25:52.401 00:25:52.401 ' 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:52.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.401 --rc genhtml_branch_coverage=1 00:25:52.401 --rc genhtml_function_coverage=1 00:25:52.401 --rc genhtml_legend=1 00:25:52.401 --rc geninfo_all_blocks=1 00:25:52.401 --rc geninfo_unexecuted_blocks=1 00:25:52.401 00:25:52.401 ' 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:52.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.401 --rc genhtml_branch_coverage=1 00:25:52.401 --rc genhtml_function_coverage=1 00:25:52.401 --rc genhtml_legend=1 00:25:52.401 --rc geninfo_all_blocks=1 00:25:52.401 --rc geninfo_unexecuted_blocks=1 00:25:52.401 00:25:52.401 ' 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.401 17:42:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:52.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:52.401 17:42:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=0361b3b182b04066841d596bba2dde59 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:25:52.401 17:42:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.995 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.995 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:25:58.995 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:58.995 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:58.995 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:58.995 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:58.995 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:58.995 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:25:58.995 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:58.995 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:25:58.995 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:25:58.995 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:25:58.995 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:25:58.995 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:25:58.995 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:25:58.995 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.995 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:58.996 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:58.996 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:58.996 Found net devices under 0000:86:00.0: cvl_0_0 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:58.996 Found net devices under 0000:86:00.1: cvl_0_1 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.996 17:42:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.996 17:42:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:58.996 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.996 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.996 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:58.996 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.996 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.996 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.996 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:58.996 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:58.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:58.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:25:58.996 00:25:58.996 --- 10.0.0.2 ping statistics --- 00:25:58.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.996 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:25:58.996 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:58.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:25:58.996 00:25:58.996 --- 10.0.0.1 ping statistics --- 00:25:58.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.996 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:25:58.996 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.996 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:25:58.996 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:58.996 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.996 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:58.996 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:58.996 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.996 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=1185411 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 1185411 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1185411 ']' 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.997 [2024-10-14 17:42:57.260624] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
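With both pings confirming the namespace split, nvmfappstart launches nvmf_tgt inside cvl_0_0_ns_spdk and blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A rough equivalent of that start-and-wait step (the harness confirms liveness over rpc.py; the plain socket check below is only an approximation):

    # Sketch: start the target in its netns and wait for the RPC socket (approximation of waitforlisten).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break   # real helper issues an RPC instead of a socket test
        sleep 0.1
    done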
00:25:58.997 [2024-10-14 17:42:57.260667] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.997 [2024-10-14 17:42:57.333092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.997 [2024-10-14 17:42:57.374301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.997 [2024-10-14 17:42:57.374336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.997 [2024-10-14 17:42:57.374344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.997 [2024-10-14 17:42:57.374350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.997 [2024-10-14 17:42:57.374355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:58.997 [2024-10-14 17:42:57.374888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.997 [2024-10-14 17:42:57.508793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.997 null0 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0361b3b182b04066841d596bba2dde59 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.997 [2024-10-14 17:42:57.557037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.997 nvme0n1 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.997 [ 00:25:58.997 { 00:25:58.997 "name": "nvme0n1", 00:25:58.997 "aliases": [ 00:25:58.997 "0361b3b1-82b0-4066-841d-596bba2dde59" 00:25:58.997 ], 00:25:58.997 "product_name": "NVMe disk", 00:25:58.997 "block_size": 512, 00:25:58.997 "num_blocks": 2097152, 00:25:58.997 "uuid": "0361b3b1-82b0-4066-841d-596bba2dde59", 00:25:58.997 "numa_id": 1, 00:25:58.997 "assigned_rate_limits": { 00:25:58.997 "rw_ios_per_sec": 0, 00:25:58.997 "rw_mbytes_per_sec": 0, 00:25:58.997 "r_mbytes_per_sec": 0, 00:25:58.997 "w_mbytes_per_sec": 0 00:25:58.997 }, 00:25:58.997 "claimed": false, 00:25:58.997 "zoned": false, 00:25:58.997 "supported_io_types": { 00:25:58.997 "read": true, 00:25:58.997 "write": true, 00:25:58.997 "unmap": false, 00:25:58.997 "flush": true, 00:25:58.997 "reset": true, 00:25:58.997 "nvme_admin": true, 00:25:58.997 "nvme_io": true, 00:25:58.997 "nvme_io_md": false, 00:25:58.997 "write_zeroes": true, 00:25:58.997 "zcopy": false, 00:25:58.997 "get_zone_info": false, 00:25:58.997 "zone_management": false, 00:25:58.997 "zone_append": false, 00:25:58.997 "compare": true, 00:25:58.997 "compare_and_write": true, 00:25:58.997 "abort": true, 00:25:58.997 "seek_hole": false, 00:25:58.997 "seek_data": false, 00:25:58.997 "copy": true, 00:25:58.997 "nvme_iov_md": false 00:25:58.997 }, 00:25:58.997 
"memory_domains": [ 00:25:58.997 { 00:25:58.997 "dma_device_id": "system", 00:25:58.997 "dma_device_type": 1 00:25:58.997 } 00:25:58.997 ], 00:25:58.997 "driver_specific": { 00:25:58.997 "nvme": [ 00:25:58.997 { 00:25:58.997 "trid": { 00:25:58.997 "trtype": "TCP", 00:25:58.997 "adrfam": "IPv4", 00:25:58.997 "traddr": "10.0.0.2", 00:25:58.997 "trsvcid": "4420", 00:25:58.997 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:58.997 }, 00:25:58.997 "ctrlr_data": { 00:25:58.997 "cntlid": 1, 00:25:58.997 "vendor_id": "0x8086", 00:25:58.997 "model_number": "SPDK bdev Controller", 00:25:58.997 "serial_number": "00000000000000000000", 00:25:58.997 "firmware_revision": "25.01", 00:25:58.997 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:58.997 "oacs": { 00:25:58.997 "security": 0, 00:25:58.997 "format": 0, 00:25:58.997 "firmware": 0, 00:25:58.997 "ns_manage": 0 00:25:58.997 }, 00:25:58.997 "multi_ctrlr": true, 00:25:58.997 "ana_reporting": false 00:25:58.997 }, 00:25:58.997 "vs": { 00:25:58.997 "nvme_version": "1.3" 00:25:58.997 }, 00:25:58.997 "ns_data": { 00:25:58.997 "id": 1, 00:25:58.997 "can_share": true 00:25:58.997 } 00:25:58.997 } 00:25:58.997 ], 00:25:58.997 "mp_policy": "active_passive" 00:25:58.997 } 00:25:58.997 } 00:25:58.997 ] 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.997 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.997 [2024-10-14 17:42:57.817554] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:58.997 [2024-10-14 17:42:57.817612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2494060 (9): Bad file descriptor 00:25:58.997 [2024-10-14 17:42:57.949678] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:58.998 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.998 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:58.998 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.998 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.998 [ 00:25:58.998 { 00:25:58.998 "name": "nvme0n1", 00:25:58.998 "aliases": [ 00:25:58.998 "0361b3b1-82b0-4066-841d-596bba2dde59" 00:25:58.998 ], 00:25:58.998 "product_name": "NVMe disk", 00:25:58.998 "block_size": 512, 00:25:58.998 "num_blocks": 2097152, 00:25:58.998 "uuid": "0361b3b1-82b0-4066-841d-596bba2dde59", 00:25:58.998 "numa_id": 1, 00:25:58.998 "assigned_rate_limits": { 00:25:58.998 "rw_ios_per_sec": 0, 00:25:58.998 "rw_mbytes_per_sec": 0, 00:25:58.998 "r_mbytes_per_sec": 0, 00:25:58.998 "w_mbytes_per_sec": 0 00:25:58.998 }, 00:25:58.998 "claimed": false, 00:25:58.998 "zoned": false, 00:25:58.998 "supported_io_types": { 00:25:58.998 "read": true, 00:25:58.998 "write": true, 00:25:58.998 "unmap": false, 00:25:58.998 "flush": true, 00:25:58.998 "reset": true, 00:25:58.998 "nvme_admin": true, 00:25:58.998 "nvme_io": true, 00:25:58.998 "nvme_io_md": false, 00:25:58.998 "write_zeroes": true, 00:25:58.998 "zcopy": false, 00:25:58.998 "get_zone_info": false, 00:25:58.998 "zone_management": false, 00:25:58.998 "zone_append": false, 00:25:58.998 "compare": true, 00:25:58.998 "compare_and_write": true, 00:25:58.998 "abort": true, 00:25:58.998 "seek_hole": false, 00:25:58.998 "seek_data": false, 00:25:58.998 "copy": true, 00:25:58.998 "nvme_iov_md": false 00:25:58.998 }, 00:25:58.998 "memory_domains": [ 00:25:58.998 { 00:25:58.998 "dma_device_id": "system", 00:25:58.998 "dma_device_type": 1 00:25:58.998 } 00:25:58.998 ], 00:25:58.998 "driver_specific": { 00:25:58.998 "nvme": [ 00:25:58.998 { 00:25:58.998 "trid": { 00:25:58.998 "trtype": "TCP", 00:25:58.998 "adrfam": "IPv4", 00:25:58.998 "traddr": "10.0.0.2", 00:25:58.998 "trsvcid": "4420", 00:25:58.998 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:58.998 }, 00:25:58.998 "ctrlr_data": { 00:25:58.998 "cntlid": 2, 00:25:58.998 "vendor_id": "0x8086", 00:25:58.998 "model_number": "SPDK bdev Controller", 00:25:58.998 "serial_number": "00000000000000000000", 00:25:58.998 "firmware_revision": "25.01", 00:25:58.998 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:58.998 "oacs": { 00:25:58.998 "security": 0, 00:25:58.998 "format": 0, 00:25:58.998 "firmware": 0, 00:25:58.998 "ns_manage": 0 00:25:58.998 }, 00:25:58.998 "multi_ctrlr": true, 00:25:58.998 "ana_reporting": false 00:25:58.998 }, 00:25:58.998 "vs": { 00:25:58.998 "nvme_version": "1.3" 00:25:58.998 }, 00:25:58.998 "ns_data": { 00:25:58.998 "id": 1, 00:25:58.998 "can_share": true 00:25:58.998 } 00:25:58.998 } 00:25:58.998 ], 00:25:58.998 "mp_policy": "active_passive" 00:25:58.998 } 00:25:58.998 } 00:25:58.998 ] 00:25:58.998 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.998 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.998 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.998 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.998 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
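The second bdev_get_bdevs dump differs from the first only in ctrlr_data.cntlid (1 before the reset, 2 after), confirming the reset tore down and re-established the admin connection. A jq check along these lines (not part of the original script) would make that assertion explicit:

    # Sketch: assert the reset produced a new controller instance (jq-based, not in async_init.sh).
    cntlid() { $rpc bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'; }
    before=$(cntlid)
    $rpc bdev_nvme_reset_controller nvme0
    after=$(cntlid)
    [ "$after" -ne "$before" ] && echo "reset allocated a new controller: cntlid $before -> $after"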
00:25:58.998 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:58.998 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.3lSxxNYglY 00:25:58.998 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:58.998 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.3lSxxNYglY 00:25:58.998 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.3lSxxNYglY 00:25:58.998 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.998 17:42:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.998 [2024-10-14 17:42:58.018159] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:58.998 [2024-10-14 17:42:58.018249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.998 [2024-10-14 17:42:58.042230] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:58.998 nvme0n1 00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.998 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:58.998 [ 00:25:58.998 { 00:25:58.998 "name": "nvme0n1", 00:25:58.998 "aliases": [ 00:25:58.998 "0361b3b1-82b0-4066-841d-596bba2dde59" 00:25:58.998 ], 00:25:58.998 "product_name": "NVMe disk", 00:25:58.998 "block_size": 512, 00:25:58.998 "num_blocks": 2097152, 00:25:58.998 "uuid": "0361b3b1-82b0-4066-841d-596bba2dde59", 00:25:58.998 "numa_id": 1, 00:25:58.998 "assigned_rate_limits": { 00:25:58.998 "rw_ios_per_sec": 0, 00:25:58.998 "rw_mbytes_per_sec": 0, 00:25:58.998 "r_mbytes_per_sec": 0, 00:25:58.998 "w_mbytes_per_sec": 0 00:25:58.998 }, 00:25:58.998 "claimed": false, 00:25:58.998 "zoned": false, 00:25:58.998 "supported_io_types": { 00:25:58.998 "read": true, 00:25:58.998 "write": true, 00:25:58.998 "unmap": false, 00:25:58.998 "flush": true, 00:25:58.998 "reset": true, 00:25:58.998 "nvme_admin": true, 00:25:58.998 "nvme_io": true, 00:25:58.998 "nvme_io_md": false, 00:25:58.998 "write_zeroes": true, 00:25:58.998 "zcopy": false, 00:25:58.998 "get_zone_info": false, 00:25:58.998 "zone_management": false, 00:25:58.998 "zone_append": false, 00:25:58.998 "compare": true, 00:25:58.998 "compare_and_write": true, 00:25:58.998 "abort": true, 00:25:58.998 "seek_hole": false, 00:25:58.998 "seek_data": false, 00:25:58.998 "copy": true, 00:25:58.998 "nvme_iov_md": false 00:25:58.998 }, 00:25:58.998 "memory_domains": [ 00:25:58.998 { 00:25:58.998 "dma_device_id": "system", 00:25:58.998 "dma_device_type": 1 00:25:58.998 } 00:25:58.998 ], 00:25:58.998 "driver_specific": { 00:25:58.998 "nvme": [ 00:25:58.998 { 00:25:58.998 "trid": { 00:25:58.998 "trtype": "TCP", 00:25:58.998 "adrfam": "IPv4", 00:25:58.998 "traddr": "10.0.0.2", 00:25:58.998 "trsvcid": "4421", 00:25:58.998 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:58.998 }, 00:25:58.998 "ctrlr_data": { 00:25:58.998 "cntlid": 3, 00:25:58.998 "vendor_id": "0x8086", 00:25:58.998 "model_number": "SPDK bdev Controller", 00:25:58.998 "serial_number": "00000000000000000000", 00:25:58.998 "firmware_revision": "25.01", 00:25:58.998 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:58.998 "oacs": { 00:25:58.998 "security": 0, 00:25:58.998 "format": 0, 00:25:58.998 "firmware": 0, 00:25:58.998 "ns_manage": 0 00:25:58.998 }, 00:25:58.998 "multi_ctrlr": true, 00:25:58.998 "ana_reporting": false 00:25:58.998 }, 00:25:58.998 "vs": { 00:25:58.998 "nvme_version": "1.3" 00:25:58.998 }, 00:25:58.998 "ns_data": { 00:25:58.998 "id": 1, 00:25:58.998 "can_share": true 00:25:58.998 } 00:25:58.998 } 00:25:58.998 ], 00:25:58.999 "mp_policy": "active_passive" 00:25:58.999 } 00:25:58.999 } 00:25:58.999 ] 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.3lSxxNYglY 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
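The TLS leg just traced: a PSK in NVMe interchange format is written to a mktemp file with 0600 permissions, registered as keyring entry key0, and then required on the secure-channel listener at 4421, with allow_any_host disabled so only host1 with the matching PSK may connect. Condensed below (key literal copied from the trace; the mktemp path differs per run):

    # Sketch: the TLS-PSK flow above, condensed (key value taken from the trace).
    key_path=$(mktemp)
    echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key_path"
    chmod 0600 "$key_path"   # keep the PSK private to the test user
    $rpc keyring_file_add_key key0 "$key_path"
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
    rm -f "$key_path"

The resulting bdev_get_bdevs output shows the controller now on trsvcid 4421 with cntlid 3, the third controller instance of the run.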
00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:59.258 rmmod nvme_tcp 00:25:59.258 rmmod nvme_fabrics 00:25:59.258 rmmod nvme_keyring 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 1185411 ']' 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 1185411 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1185411 ']' 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1185411 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1185411 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1185411' 00:25:59.258 killing process with pid 1185411 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1185411 00:25:59.258 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1185411 00:25:59.516 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:59.516 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:59.516 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:59.516 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:25:59.516 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:25:59.516 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:59.516 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:25:59.516 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:59.516 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:59.516 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
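nvmftestfini then unwinds everything nvmftestinit set up: unload nvme-tcp/nvme-fabrics/nvme-keyring, kill the target by pid, strip only the SPDK-tagged firewall rules, and drop the namespace. The iptables step works because every rule added earlier carries an SPDK_NVMF comment (see the ipts call before the pings); roughly:

    # Sketch: selective firewall + namespace cleanup, mirroring iptr/remove_spdk_ns above.
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only rules commented 'SPDK_NVMF:...'
    ip netns del cvl_0_0_ns_spdk 2>/dev/null || true       # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                               # clear the initiator-side address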
00:25:59.516 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.516 17:42:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.421 17:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:01.421 00:26:01.421 real 0m9.439s 00:26:01.421 user 0m3.058s 00:26:01.421 sys 0m4.786s 00:26:01.421 17:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:01.421 17:43:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:01.421 ************************************ 00:26:01.421 END TEST nvmf_async_init 00:26:01.421 ************************************ 00:26:01.421 17:43:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:01.421 17:43:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:01.421 17:43:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:01.421 17:43:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.681 ************************************ 00:26:01.681 START TEST dma 00:26:01.681 ************************************ 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:01.681 * Looking for test storage... 00:26:01.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:01.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.681 --rc genhtml_branch_coverage=1 00:26:01.681 --rc genhtml_function_coverage=1 00:26:01.681 --rc genhtml_legend=1 00:26:01.681 --rc geninfo_all_blocks=1 00:26:01.681 --rc geninfo_unexecuted_blocks=1 00:26:01.681 00:26:01.681 ' 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:01.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.681 --rc genhtml_branch_coverage=1 00:26:01.681 --rc genhtml_function_coverage=1 00:26:01.681 --rc genhtml_legend=1 00:26:01.681 --rc geninfo_all_blocks=1 00:26:01.681 --rc geninfo_unexecuted_blocks=1 00:26:01.681 00:26:01.681 ' 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:01.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.681 --rc genhtml_branch_coverage=1 00:26:01.681 --rc genhtml_function_coverage=1 00:26:01.681 --rc genhtml_legend=1 00:26:01.681 --rc geninfo_all_blocks=1 00:26:01.681 --rc geninfo_unexecuted_blocks=1 00:26:01.681 00:26:01.681 ' 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:01.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.681 --rc genhtml_branch_coverage=1 00:26:01.681 --rc genhtml_function_coverage=1 00:26:01.681 --rc genhtml_legend=1 00:26:01.681 --rc geninfo_all_blocks=1 00:26:01.681 --rc geninfo_unexecuted_blocks=1 00:26:01.681 00:26:01.681 ' 00:26:01.681 17:43:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.682 
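The lcov probe traced above (scripts/common.sh) splits both version strings on '.', '-' and ':' and compares them field by field; 1.15 loses to 2 in the first field, so lt returns 0 and the coverage-friendly LCOV_OPTS get exported. A standalone sketch reconstructed from that trace (the function name and the zero-padding of missing fields are assumptions; the real cmp_versions also regex-guards each field):

  # compare two dotted versions field by field; succeed (return 0) when $1 < $2
  version_lt() {
    local IFS=.-:            # same separators the traced read -ra uses
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
      (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1
      (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0
    done
    return 1                 # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov 1.15 is older than 2"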
17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:01.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:26:01.682 00:26:01.682 real 0m0.206s 00:26:01.682 user 0m0.124s 00:26:01.682 sys 0m0.096s 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:01.682 ************************************ 00:26:01.682 END TEST dma 00:26:01.682 ************************************ 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:01.682 17:43:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.941 ************************************ 00:26:01.941 START TEST nvmf_identify 00:26:01.941 
************************************ 00:26:01.941 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:01.941 * Looking for test storage... 00:26:01.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:01.941 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:01.941 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:26:01.941 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:01.941 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:01.941 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:01.941 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:01.941 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:01.941 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:26:01.942 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:26:01.942 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:26:01.942 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:26:01.942 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:26:01.942 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:26:01.942 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:26:01.942 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:01.942 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:26:01.942 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:26:01.942 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:01.942 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:01.942 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:26:01.942 17:43:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:01.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.942 --rc genhtml_branch_coverage=1 00:26:01.942 --rc genhtml_function_coverage=1 00:26:01.942 --rc genhtml_legend=1 00:26:01.942 --rc geninfo_all_blocks=1 00:26:01.942 --rc geninfo_unexecuted_blocks=1 00:26:01.942 00:26:01.942 ' 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:01.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.942 --rc genhtml_branch_coverage=1 00:26:01.942 --rc genhtml_function_coverage=1 00:26:01.942 --rc genhtml_legend=1 00:26:01.942 --rc geninfo_all_blocks=1 00:26:01.942 --rc geninfo_unexecuted_blocks=1 00:26:01.942 00:26:01.942 ' 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:01.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.942 --rc genhtml_branch_coverage=1 00:26:01.942 --rc genhtml_function_coverage=1 00:26:01.942 --rc genhtml_legend=1 00:26:01.942 --rc geninfo_all_blocks=1 00:26:01.942 --rc geninfo_unexecuted_blocks=1 00:26:01.942 00:26:01.942 ' 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:01.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.942 --rc genhtml_branch_coverage=1 00:26:01.942 --rc genhtml_function_coverage=1 00:26:01.942 --rc genhtml_legend=1 00:26:01.942 --rc geninfo_all_blocks=1 00:26:01.942 --rc geninfo_unexecuted_blocks=1 00:26:01.942 00:26:01.942 ' 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:01.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:26:01.942 17:43:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:08.534 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:08.534 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
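Device discovery here walks the two Intel E810 functions (0000:86:00.0 and 0000:86:00.1, device id 0x159b, driver ice) and resolves each PCI address to its renamed net device through a sysfs glob, as traced at nvmf/common.sh@409 and @425. The lookup in isolation:

  # map a NIC's PCI address to its kernel net device name via sysfs
  # (on this test bed 0000:86:00.0 resolves to cvl_0_0)
  pci=0000:86:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path prefix
  echo "Found net devices under $pci: ${pci_net_devs[*]}"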
00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:08.534 Found net devices under 0000:86:00.0: cvl_0_0 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:08.534 Found net devices under 0000:86:00.1: cvl_0_1 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:08.534 17:43:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:08.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:08.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:26:08.534 00:26:08.534 --- 10.0.0.2 ping statistics --- 00:26:08.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.534 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:26:08.534 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:08.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:08.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:26:08.534 00:26:08.534 --- 10.0.0.1 ping statistics --- 00:26:08.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.534 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:26:08.534 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:08.534 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1189145 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1189145 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1189145 ']' 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.535 [2024-10-14 17:43:07.107542] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
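nvmf_tcp_init above splits the two ports of the NIC across namespaces: the target-side port cvl_0_0 moves into a fresh netns while the initiator port cvl_0_1 stays in the root namespace, each side gets one 10.0.0.0/24 address, a tagged iptables ACCEPT opens port 4420, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. Condensed from the commands traced at nvmf/common.sh@271-291:

  # put the target port in its own namespace and address both ends (needs root)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic in, tagged so nvmftestfini can strip it again
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                                   # reachability check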
00:26:08.535 [2024-10-14 17:43:07.107587] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.535 [2024-10-14 17:43:07.181361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:08.535 [2024-10-14 17:43:07.225906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:08.535 [2024-10-14 17:43:07.225946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:08.535 [2024-10-14 17:43:07.225954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:08.535 [2024-10-14 17:43:07.225960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:08.535 [2024-10-14 17:43:07.225965] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:08.535 [2024-10-14 17:43:07.227529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.535 [2024-10-14 17:43:07.227660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:08.535 [2024-10-14 17:43:07.227692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.535 [2024-10-14 17:43:07.227694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.535 [2024-10-14 17:43:07.336871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.535 Malloc0 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.535 [2024-10-14 17:43:07.444879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:08.535 [ 00:26:08.535 { 00:26:08.535 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:08.535 "subtype": "Discovery", 00:26:08.535 "listen_addresses": [ 00:26:08.535 { 00:26:08.535 "trtype": "TCP", 00:26:08.535 "adrfam": "IPv4", 00:26:08.535 "traddr": "10.0.0.2", 00:26:08.535 "trsvcid": "4420" 00:26:08.535 } 00:26:08.535 ], 00:26:08.535 "allow_any_host": true, 00:26:08.535 "hosts": [] 00:26:08.535 }, 00:26:08.535 { 00:26:08.535 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.535 "subtype": "NVMe", 00:26:08.535 "listen_addresses": [ 00:26:08.535 { 00:26:08.535 "trtype": "TCP", 00:26:08.535 "adrfam": "IPv4", 00:26:08.535 "traddr": "10.0.0.2", 00:26:08.535 "trsvcid": "4420" 00:26:08.535 } 00:26:08.535 ], 00:26:08.535 "allow_any_host": true, 00:26:08.535 "hosts": [], 00:26:08.535 "serial_number": "SPDK00000000000001", 00:26:08.535 "model_number": "SPDK bdev Controller", 00:26:08.535 "max_namespaces": 32, 00:26:08.535 "min_cntlid": 1, 00:26:08.535 "max_cntlid": 65519, 00:26:08.535 "namespaces": [ 00:26:08.535 { 00:26:08.535 "nsid": 1, 00:26:08.535 "bdev_name": "Malloc0", 00:26:08.535 "name": "Malloc0", 00:26:08.535 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:08.535 "eui64": "ABCDEF0123456789", 00:26:08.535 "uuid": "056d7222-cdf2-4770-bbf8-73fe7d0b54bd" 00:26:08.535 } 00:26:08.535 ] 00:26:08.535 } 00:26:08.535 ] 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.535 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:08.535 [2024-10-14 17:43:07.497974] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:26:08.535 [2024-10-14 17:43:07.498007] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189262 ] 00:26:08.535 [2024-10-14 17:43:07.526949] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:26:08.535 [2024-10-14 17:43:07.526991] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:08.535 [2024-10-14 17:43:07.526996] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:08.535 [2024-10-14 17:43:07.527007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:08.535 [2024-10-14 17:43:07.527015] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:08.535 [2024-10-14 17:43:07.527598] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:26:08.535 [2024-10-14 17:43:07.527634] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x12ac760 0 00:26:08.535 [2024-10-14 17:43:07.541607] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:08.535 [2024-10-14 17:43:07.541623] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:08.535 [2024-10-14 17:43:07.541628] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:08.535 [2024-10-14 17:43:07.541631] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:08.535 [2024-10-14 17:43:07.541660] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.535 [2024-10-14 17:43:07.541666] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.535 [2024-10-14 17:43:07.541669] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ac760) 00:26:08.535 [2024-10-14 17:43:07.541682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:08.535 [2024-10-14 17:43:07.541702] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c480, cid 0, qid 0 00:26:08.535 [2024-10-14 17:43:07.548782] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.535 [2024-10-14 17:43:07.548791] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.535 [2024-10-14 17:43:07.548794] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.535 [2024-10-14 17:43:07.548798] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c480) on tqpair=0x12ac760 00:26:08.535 [2024-10-14 17:43:07.548809] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:08.535 [2024-10-14 17:43:07.548815] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:08.535 [2024-10-14 17:43:07.548819] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:26:08.535 [2024-10-14 17:43:07.548831] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.535 [2024-10-14 17:43:07.548834] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.548838] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ac760) 00:26:08.536 [2024-10-14 17:43:07.548845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.536 [2024-10-14 17:43:07.548858] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c480, cid 0, qid 0 00:26:08.536 [2024-10-14 17:43:07.548996] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.536 [2024-10-14 17:43:07.549002] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.536 [2024-10-14 17:43:07.549005] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549008] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c480) on tqpair=0x12ac760 00:26:08.536 [2024-10-14 17:43:07.549013] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:08.536 [2024-10-14 17:43:07.549019] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:08.536 [2024-10-14 17:43:07.549026] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549029] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549032] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ac760) 00:26:08.536 [2024-10-14 17:43:07.549038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.536 [2024-10-14 17:43:07.549048] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c480, cid 0, qid 0 00:26:08.536 [2024-10-14 17:43:07.549115] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.536 [2024-10-14 17:43:07.549121] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.536 [2024-10-14 17:43:07.549124] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549128] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c480) on tqpair=0x12ac760 00:26:08.536 [2024-10-14 17:43:07.549132] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:08.536 [2024-10-14 17:43:07.549138] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:08.536 [2024-10-14 17:43:07.549144] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549148] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549151] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ac760) 00:26:08.536 [2024-10-14 17:43:07.549156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.536 [2024-10-14 17:43:07.549165] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c480, cid 0, qid 0 00:26:08.536 
[2024-10-14 17:43:07.549225] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.536 [2024-10-14 17:43:07.549231] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.536 [2024-10-14 17:43:07.549234] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549237] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c480) on tqpair=0x12ac760 00:26:08.536 [2024-10-14 17:43:07.549242] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:08.536 [2024-10-14 17:43:07.549250] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549253] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549256] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ac760) 00:26:08.536 [2024-10-14 17:43:07.549262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.536 [2024-10-14 17:43:07.549271] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c480, cid 0, qid 0 00:26:08.536 [2024-10-14 17:43:07.549335] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.536 [2024-10-14 17:43:07.549341] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.536 [2024-10-14 17:43:07.549344] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549347] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c480) on tqpair=0x12ac760 00:26:08.536 [2024-10-14 17:43:07.549351] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:08.536 [2024-10-14 17:43:07.549355] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:08.536 [2024-10-14 17:43:07.549362] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:08.536 [2024-10-14 17:43:07.549466] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:08.536 [2024-10-14 17:43:07.549471] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:08.536 [2024-10-14 17:43:07.549478] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549482] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549485] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ac760) 00:26:08.536 [2024-10-14 17:43:07.549490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.536 [2024-10-14 17:43:07.549499] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c480, cid 0, qid 0 00:26:08.536 [2024-10-14 17:43:07.549561] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.536 [2024-10-14 17:43:07.549567] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:26:08.536 [2024-10-14 17:43:07.549570] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549573] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c480) on tqpair=0x12ac760 00:26:08.536 [2024-10-14 17:43:07.549577] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:08.536 [2024-10-14 17:43:07.549585] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549589] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549592] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ac760) 00:26:08.536 [2024-10-14 17:43:07.549597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.536 [2024-10-14 17:43:07.549614] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c480, cid 0, qid 0 00:26:08.536 [2024-10-14 17:43:07.549676] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.536 [2024-10-14 17:43:07.549681] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.536 [2024-10-14 17:43:07.549684] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549688] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c480) on tqpair=0x12ac760 00:26:08.536 [2024-10-14 17:43:07.549691] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:08.536 [2024-10-14 17:43:07.549695] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:08.536 [2024-10-14 17:43:07.549702] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:08.536 [2024-10-14 17:43:07.549709] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:08.536 [2024-10-14 17:43:07.549716] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549720] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ac760) 00:26:08.536 [2024-10-14 17:43:07.549725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.536 [2024-10-14 17:43:07.549736] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c480, cid 0, qid 0 00:26:08.536 [2024-10-14 17:43:07.549827] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.536 [2024-10-14 17:43:07.549832] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.536 [2024-10-14 17:43:07.549836] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549839] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12ac760): datao=0, datal=4096, cccid=0 00:26:08.536 [2024-10-14 17:43:07.549843] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x130c480) on tqpair(0x12ac760): expected_datao=0, 
payload_size=4096 00:26:08.536 [2024-10-14 17:43:07.549847] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549859] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549863] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549899] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.536 [2024-10-14 17:43:07.549904] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.536 [2024-10-14 17:43:07.549907] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549910] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c480) on tqpair=0x12ac760 00:26:08.536 [2024-10-14 17:43:07.549917] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:08.536 [2024-10-14 17:43:07.549921] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:08.536 [2024-10-14 17:43:07.549925] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:08.536 [2024-10-14 17:43:07.549929] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:08.536 [2024-10-14 17:43:07.549933] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:08.536 [2024-10-14 17:43:07.549937] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:08.536 [2024-10-14 17:43:07.549946] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:08.536 [2024-10-14 17:43:07.549954] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549958] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.536 [2024-10-14 17:43:07.549961] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ac760) 00:26:08.536 [2024-10-14 17:43:07.549967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:08.536 [2024-10-14 17:43:07.549977] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c480, cid 0, qid 0 00:26:08.537 [2024-10-14 17:43:07.550043] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.537 [2024-10-14 17:43:07.550048] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.537 [2024-10-14 17:43:07.550051] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.550055] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c480) on tqpair=0x12ac760 00:26:08.537 [2024-10-14 17:43:07.550060] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.550064] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.550067] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ac760) 00:26:08.537 [2024-10-14 17:43:07.550072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.537 [2024-10-14 17:43:07.550077] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.550080] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.550083] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x12ac760) 00:26:08.537 [2024-10-14 17:43:07.550088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.537 [2024-10-14 17:43:07.550093] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.550096] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.550099] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x12ac760) 00:26:08.537 [2024-10-14 17:43:07.550104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.537 [2024-10-14 17:43:07.550109] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.550112] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.550115] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760) 00:26:08.537 [2024-10-14 17:43:07.550120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.537 [2024-10-14 17:43:07.550124] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:26:08.537 [2024-10-14 17:43:07.550134] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:08.537 [2024-10-14 17:43:07.550140] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.550143] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12ac760) 00:26:08.537 [2024-10-14 17:43:07.550148] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.537 [2024-10-14 17:43:07.550159] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c480, cid 0, qid 0 00:26:08.537 [2024-10-14 17:43:07.550164] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c600, cid 1, qid 0 00:26:08.537 [2024-10-14 17:43:07.550168] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c780, cid 2, qid 0 00:26:08.537 [2024-10-14 17:43:07.550174] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.537 [2024-10-14 17:43:07.550178] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130ca80, cid 4, qid 0 00:26:08.537 [2024-10-14 17:43:07.550270] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.537 [2024-10-14 17:43:07.550276] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.537 [2024-10-14 17:43:07.550279] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.550282] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x130ca80) on tqpair=0x12ac760 00:26:08.537 [2024-10-14 17:43:07.550286] nvme_ctrlr.c:3077:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:08.537 [2024-10-14 17:43:07.550291] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:08.537 [2024-10-14 17:43:07.550300] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.550303] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12ac760) 00:26:08.537 [2024-10-14 17:43:07.550309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.537 [2024-10-14 17:43:07.550318] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130ca80, cid 4, qid 0 00:26:08.537 [2024-10-14 17:43:07.550391] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.537 [2024-10-14 17:43:07.550397] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.537 [2024-10-14 17:43:07.550400] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.550403] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12ac760): datao=0, datal=4096, cccid=4 00:26:08.537 [2024-10-14 17:43:07.550407] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x130ca80) on tqpair(0x12ac760): expected_datao=0, payload_size=4096 00:26:08.537 [2024-10-14 17:43:07.550411] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.550421] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.550424] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.591715] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.537 [2024-10-14 17:43:07.591728] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.537 [2024-10-14 17:43:07.591731] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.591735] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130ca80) on tqpair=0x12ac760 00:26:08.537 [2024-10-14 17:43:07.591748] nvme_ctrlr.c:4220:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:08.537 [2024-10-14 17:43:07.591773] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.591777] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12ac760) 00:26:08.537 [2024-10-14 17:43:07.591784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.537 [2024-10-14 17:43:07.591790] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.591794] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.591797] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12ac760) 00:26:08.537 [2024-10-14 17:43:07.591802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.537 [2024-10-14 
17:43:07.591816] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130ca80, cid 4, qid 0 00:26:08.537 [2024-10-14 17:43:07.591820] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130cc00, cid 5, qid 0 00:26:08.537 [2024-10-14 17:43:07.591919] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.537 [2024-10-14 17:43:07.591925] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.537 [2024-10-14 17:43:07.591928] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.591931] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12ac760): datao=0, datal=1024, cccid=4 00:26:08.537 [2024-10-14 17:43:07.591935] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x130ca80) on tqpair(0x12ac760): expected_datao=0, payload_size=1024 00:26:08.537 [2024-10-14 17:43:07.591939] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.591944] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.591948] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.591953] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.537 [2024-10-14 17:43:07.591958] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.537 [2024-10-14 17:43:07.591961] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.591964] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130cc00) on tqpair=0x12ac760 00:26:08.537 [2024-10-14 17:43:07.636609] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.537 [2024-10-14 17:43:07.636620] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.537 [2024-10-14 17:43:07.636624] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.636627] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130ca80) on tqpair=0x12ac760 00:26:08.537 [2024-10-14 17:43:07.636646] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.636650] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12ac760) 00:26:08.537 [2024-10-14 17:43:07.636657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.537 [2024-10-14 17:43:07.636673] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130ca80, cid 4, qid 0 00:26:08.537 [2024-10-14 17:43:07.636827] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.537 [2024-10-14 17:43:07.636833] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.537 [2024-10-14 17:43:07.636836] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.636839] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12ac760): datao=0, datal=3072, cccid=4 00:26:08.537 [2024-10-14 17:43:07.636843] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x130ca80) on tqpair(0x12ac760): expected_datao=0, payload_size=3072 00:26:08.537 [2024-10-14 17:43:07.636847] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.537 [2024-10-14 17:43:07.636865] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:08.537 [2024-10-14 17:43:07.636868] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:08.537 [2024-10-14 17:43:07.636915] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.537 [2024-10-14 17:43:07.636920] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.537 [2024-10-14 17:43:07.636923] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.537 [2024-10-14 17:43:07.636926] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130ca80) on tqpair=0x12ac760
00:26:08.537 [2024-10-14 17:43:07.636934] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.537 [2024-10-14 17:43:07.636937] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12ac760)
00:26:08.537 [2024-10-14 17:43:07.636943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.537 [2024-10-14 17:43:07.636957] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130ca80, cid 4, qid 0
00:26:08.537 [2024-10-14 17:43:07.637028] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:26:08.537 [2024-10-14 17:43:07.637036] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:26:08.538 [2024-10-14 17:43:07.637039] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:26:08.538 [2024-10-14 17:43:07.637042] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12ac760): datao=0, datal=8, cccid=4
00:26:08.538 [2024-10-14 17:43:07.637046] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x130ca80) on tqpair(0x12ac760): expected_datao=0, payload_size=8
00:26:08.538 [2024-10-14 17:43:07.637050] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.538 [2024-10-14 17:43:07.637055] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:08.538 [2024-10-14 17:43:07.637058] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:08.802 [2024-10-14 17:43:07.677778] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.802 [2024-10-14 17:43:07.677787] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.802 [2024-10-14 17:43:07.677790] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.802 [2024-10-14 17:43:07.677793] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130ca80) on tqpair=0x12ac760
00:26:08.802 =====================================================
00:26:08.802 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:26:08.802 =====================================================
00:26:08.802 Controller Capabilities/Features
00:26:08.802 ================================
00:26:08.802 Vendor ID: 0000
00:26:08.802 Subsystem Vendor ID: 0000
00:26:08.802 Serial Number: ....................
00:26:08.802 Model Number: ........................................
00:26:08.802 Firmware Version: 25.01
00:26:08.802 Recommended Arb Burst: 0
00:26:08.802 IEEE OUI Identifier: 00 00 00
00:26:08.802 Multi-path I/O
00:26:08.802 May have multiple subsystem ports: No
00:26:08.802 May have multiple controllers: No
00:26:08.802 Associated with SR-IOV VF: No
00:26:08.802 Max Data Transfer Size: 131072
00:26:08.802 Max Number of Namespaces: 0
00:26:08.802 Max Number of I/O Queues: 1024
00:26:08.802 NVMe Specification Version (VS): 1.3
00:26:08.802 NVMe Specification Version (Identify): 1.3
00:26:08.802 Maximum Queue Entries: 128
00:26:08.802 Contiguous Queues Required: Yes
00:26:08.802 Arbitration Mechanisms Supported
00:26:08.802 Weighted Round Robin: Not Supported
00:26:08.802 Vendor Specific: Not Supported
00:26:08.802 Reset Timeout: 15000 ms
00:26:08.802 Doorbell Stride: 4 bytes
00:26:08.802 NVM Subsystem Reset: Not Supported
00:26:08.802 Command Sets Supported
00:26:08.802 NVM Command Set: Supported
00:26:08.802 Boot Partition: Not Supported
00:26:08.802 Memory Page Size Minimum: 4096 bytes
00:26:08.802 Memory Page Size Maximum: 4096 bytes
00:26:08.802 Persistent Memory Region: Not Supported
00:26:08.802 Optional Asynchronous Events Supported
00:26:08.802 Namespace Attribute Notices: Not Supported
00:26:08.802 Firmware Activation Notices: Not Supported
00:26:08.802 ANA Change Notices: Not Supported
00:26:08.802 PLE Aggregate Log Change Notices: Not Supported
00:26:08.802 LBA Status Info Alert Notices: Not Supported
00:26:08.802 EGE Aggregate Log Change Notices: Not Supported
00:26:08.802 Normal NVM Subsystem Shutdown event: Not Supported
00:26:08.802 Zone Descriptor Change Notices: Not Supported
00:26:08.802 Discovery Log Change Notices: Supported
00:26:08.802 Controller Attributes
00:26:08.802 128-bit Host Identifier: Not Supported
00:26:08.802 Non-Operational Permissive Mode: Not Supported
00:26:08.802 NVM Sets: Not Supported
00:26:08.802 Read Recovery Levels: Not Supported
00:26:08.802 Endurance Groups: Not Supported
00:26:08.802 Predictable Latency Mode: Not Supported
00:26:08.802 Traffic Based Keep ALive: Not Supported
00:26:08.802 Namespace Granularity: Not Supported
00:26:08.802 SQ Associations: Not Supported
00:26:08.802 UUID List: Not Supported
00:26:08.803 Multi-Domain Subsystem: Not Supported
00:26:08.803 Fixed Capacity Management: Not Supported
00:26:08.803 Variable Capacity Management: Not Supported
00:26:08.803 Delete Endurance Group: Not Supported
00:26:08.803 Delete NVM Set: Not Supported
00:26:08.803 Extended LBA Formats Supported: Not Supported
00:26:08.803 Flexible Data Placement Supported: Not Supported
00:26:08.803
00:26:08.803 Controller Memory Buffer Support
00:26:08.803 ================================
00:26:08.803 Supported: No
00:26:08.803
00:26:08.803 Persistent Memory Region Support
00:26:08.803 ================================
00:26:08.803 Supported: No
00:26:08.803
00:26:08.803 Admin Command Set Attributes
00:26:08.803 ============================
00:26:08.803 Security Send/Receive: Not Supported
00:26:08.803 Format NVM: Not Supported
00:26:08.803 Firmware Activate/Download: Not Supported
00:26:08.803 Namespace Management: Not Supported
00:26:08.803 Device Self-Test: Not Supported
00:26:08.803 Directives: Not Supported
00:26:08.803 NVMe-MI: Not Supported
00:26:08.803 Virtualization Management: Not Supported
00:26:08.803 Doorbell Buffer Config: Not Supported
00:26:08.803 Get LBA Status Capability: Not Supported
00:26:08.803 Command & Feature Lockdown Capability: Not Supported
00:26:08.803 Abort Command Limit: 1
00:26:08.803 Async Event Request Limit: 4
00:26:08.803 Number of Firmware Slots: N/A
00:26:08.803 Firmware Slot 1 Read-Only: N/A
00:26:08.803 Firmware Activation Without Reset: N/A
00:26:08.803 Multiple Update Detection Support: N/A
00:26:08.803 Firmware Update Granularity: No Information Provided
00:26:08.803 Per-Namespace SMART Log: No
00:26:08.803 Asymmetric Namespace Access Log Page: Not Supported
00:26:08.803 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:26:08.803 Command Effects Log Page: Not Supported
00:26:08.803 Get Log Page Extended Data: Supported
00:26:08.803 Telemetry Log Pages: Not Supported
00:26:08.803 Persistent Event Log Pages: Not Supported
00:26:08.803 Supported Log Pages Log Page: May Support
00:26:08.803 Commands Supported & Effects Log Page: Not Supported
00:26:08.803 Feature Identifiers & Effects Log Page:May Support
00:26:08.803 NVMe-MI Commands & Effects Log Page: May Support
00:26:08.803 Data Area 4 for Telemetry Log: Not Supported
00:26:08.803 Error Log Page Entries Supported: 128
00:26:08.803 Keep Alive: Not Supported
00:26:08.803
00:26:08.803 NVM Command Set Attributes
00:26:08.803 ==========================
00:26:08.803 Submission Queue Entry Size
00:26:08.803 Max: 1
00:26:08.803 Min: 1
00:26:08.803 Completion Queue Entry Size
00:26:08.803 Max: 1
00:26:08.803 Min: 1
00:26:08.803 Number of Namespaces: 0
00:26:08.803 Compare Command: Not Supported
00:26:08.803 Write Uncorrectable Command: Not Supported
00:26:08.803 Dataset Management Command: Not Supported
00:26:08.803 Write Zeroes Command: Not Supported
00:26:08.803 Set Features Save Field: Not Supported
00:26:08.803 Reservations: Not Supported
00:26:08.803 Timestamp: Not Supported
00:26:08.803 Copy: Not Supported
00:26:08.803 Volatile Write Cache: Not Present
00:26:08.803 Atomic Write Unit (Normal): 1
00:26:08.803 Atomic Write Unit (PFail): 1
00:26:08.803 Atomic Compare & Write Unit: 1
00:26:08.803 Fused Compare & Write: Supported
00:26:08.803 Scatter-Gather List
00:26:08.803 SGL Command Set: Supported
00:26:08.803 SGL Keyed: Supported
00:26:08.803 SGL Bit Bucket Descriptor: Not Supported
00:26:08.803 SGL Metadata Pointer: Not Supported
00:26:08.803 Oversized SGL: Not Supported
00:26:08.803 SGL Metadata Address: Not Supported
00:26:08.803 SGL Offset: Supported
00:26:08.803 Transport SGL Data Block: Not Supported
00:26:08.803 Replay Protected Memory Block: Not Supported
00:26:08.803
00:26:08.803 Firmware Slot Information
00:26:08.803 =========================
00:26:08.803 Active slot: 0
00:26:08.803
00:26:08.803
00:26:08.803 Error Log
00:26:08.803 =========
00:26:08.803
00:26:08.803 Active Namespaces
00:26:08.803 =================
00:26:08.803 Discovery Log Page
00:26:08.803 ==================
00:26:08.803 Generation Counter: 2
00:26:08.803 Number of Records: 2
00:26:08.803 Record Format: 0
00:26:08.803
00:26:08.803 Discovery Log Entry 0
00:26:08.803 ----------------------
00:26:08.803 Transport Type: 3 (TCP)
00:26:08.803 Address Family: 1 (IPv4)
00:26:08.803 Subsystem Type: 3 (Current Discovery Subsystem)
00:26:08.803 Entry Flags:
00:26:08.803 Duplicate Returned Information: 1
00:26:08.803 Explicit Persistent Connection Support for Discovery: 1
00:26:08.803 Transport Requirements:
00:26:08.803 Secure Channel: Not Required
00:26:08.803 Port ID: 0 (0x0000)
00:26:08.803 Controller ID: 65535 (0xffff)
00:26:08.803 Admin Max SQ Size: 128
00:26:08.803 Transport Service Identifier: 4420
00:26:08.803 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:26:08.803 Transport Address: 10.0.0.2
00:26:08.803 Discovery Log Entry 1
00:26:08.803 ----------------------
00:26:08.803 Transport Type: 3 (TCP)
00:26:08.803 Address Family: 1 (IPv4)
00:26:08.803 Subsystem Type: 2 (NVM Subsystem)
00:26:08.803 Entry Flags:
00:26:08.803 Duplicate Returned Information: 0
00:26:08.803 Explicit Persistent Connection Support for Discovery: 0
00:26:08.803 Transport Requirements:
00:26:08.803 Secure Channel: Not Required
00:26:08.803 Port ID: 0 (0x0000)
00:26:08.803 Controller ID: 65535 (0xffff)
00:26:08.803 Admin Max SQ Size: 128
00:26:08.803 Transport Service Identifier: 4420
00:26:08.803 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:26:08.803 Transport Address: 10.0.0.2 [2024-10-14 17:43:07.677866] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:26:08.803 [2024-10-14 17:43:07.677877] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c480) on tqpair=0x12ac760
00:26:08.803 [2024-10-14 17:43:07.677884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.803 [2024-10-14 17:43:07.677888] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c600) on tqpair=0x12ac760
00:26:08.803 [2024-10-14 17:43:07.677892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.803 [2024-10-14 17:43:07.677897] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c780) on tqpair=0x12ac760
00:26:08.803 [2024-10-14 17:43:07.677900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.803 [2024-10-14 17:43:07.677905] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760
00:26:08.803 [2024-10-14 17:43:07.677908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.803 [2024-10-14 17:43:07.677916] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.803 [2024-10-14 17:43:07.677919] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.803 [2024-10-14 17:43:07.677922] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760)
00:26:08.803 [2024-10-14 17:43:07.677929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.803 [2024-10-14 17:43:07.677942] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0
00:26:08.803 [2024-10-14 17:43:07.678010] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.803 [2024-10-14 17:43:07.678016] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.803 [2024-10-14 17:43:07.678018] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.803 [2024-10-14 17:43:07.678022] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760
00:26:08.803 [2024-10-14 17:43:07.678027] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.803 [2024-10-14 17:43:07.678031] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.803 [2024-10-14 17:43:07.678034] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760)
00:26:08.803 [2024-10-14
17:43:07.678039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.803 [2024-10-14 17:43:07.678051] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.803 [2024-10-14 17:43:07.678124] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.803 [2024-10-14 17:43:07.678131] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.803 [2024-10-14 17:43:07.678134] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.803 [2024-10-14 17:43:07.678138] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760 00:26:08.803 [2024-10-14 17:43:07.678141] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:26:08.803 [2024-10-14 17:43:07.678145] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:26:08.803 [2024-10-14 17:43:07.678156] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.803 [2024-10-14 17:43:07.678160] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.803 [2024-10-14 17:43:07.678163] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760) 00:26:08.803 [2024-10-14 17:43:07.678168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.803 [2024-10-14 17:43:07.678177] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.803 [2024-10-14 17:43:07.678259] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.803 [2024-10-14 17:43:07.678265] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.803 [2024-10-14 17:43:07.678268] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.803 [2024-10-14 17:43:07.678271] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760 00:26:08.803 [2024-10-14 17:43:07.678279] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.803 [2024-10-14 17:43:07.678283] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.803 [2024-10-14 17:43:07.678286] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760) 00:26:08.804 [2024-10-14 17:43:07.678291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.804 [2024-10-14 17:43:07.678300] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.804 [2024-10-14 17:43:07.678410] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.804 [2024-10-14 17:43:07.678416] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.804 [2024-10-14 17:43:07.678419] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.678422] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760 00:26:08.804 [2024-10-14 17:43:07.678430] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.678433] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.678436] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760) 00:26:08.804 [2024-10-14 17:43:07.678442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.804 [2024-10-14 17:43:07.678451] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.804 [2024-10-14 17:43:07.678564] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.804 [2024-10-14 17:43:07.678569] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.804 [2024-10-14 17:43:07.678572] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.678575] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760 00:26:08.804 [2024-10-14 17:43:07.678583] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.678586] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.678589] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760) 00:26:08.804 [2024-10-14 17:43:07.678595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.804 [2024-10-14 17:43:07.678611] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.804 [2024-10-14 17:43:07.678677] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.804 [2024-10-14 17:43:07.678682] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.804 [2024-10-14 17:43:07.678685] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.678689] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760 00:26:08.804 [2024-10-14 17:43:07.678696] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.678700] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.678703] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760) 00:26:08.804 [2024-10-14 17:43:07.678708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.804 [2024-10-14 17:43:07.678717] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.804 [2024-10-14 17:43:07.678815] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.804 [2024-10-14 17:43:07.678820] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.804 [2024-10-14 17:43:07.678823] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.678826] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760 00:26:08.804 [2024-10-14 17:43:07.678834] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.678837] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.678840] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760) 00:26:08.804 [2024-10-14 17:43:07.678846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.804 [2024-10-14 17:43:07.678855] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.804 [2024-10-14 17:43:07.678914] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.804 [2024-10-14 17:43:07.678919] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.804 [2024-10-14 17:43:07.678922] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.678926] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760 00:26:08.804 [2024-10-14 17:43:07.678933] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.678937] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.678940] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760) 00:26:08.804 [2024-10-14 17:43:07.678945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.804 [2024-10-14 17:43:07.678954] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.804 [2024-10-14 17:43:07.679016] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.804 [2024-10-14 17:43:07.679021] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.804 [2024-10-14 17:43:07.679024] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679027] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760 00:26:08.804 [2024-10-14 17:43:07.679035] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679038] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679041] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760) 00:26:08.804 [2024-10-14 17:43:07.679047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.804 [2024-10-14 17:43:07.679055] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.804 [2024-10-14 17:43:07.679114] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.804 [2024-10-14 17:43:07.679120] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.804 [2024-10-14 17:43:07.679123] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679126] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760 00:26:08.804 [2024-10-14 17:43:07.679134] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679137] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679140] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760) 00:26:08.804 [2024-10-14 17:43:07.679146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.804 [2024-10-14 17:43:07.679155] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.804 
[2024-10-14 17:43:07.679267] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.804 [2024-10-14 17:43:07.679272] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.804 [2024-10-14 17:43:07.679275] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679278] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760 00:26:08.804 [2024-10-14 17:43:07.679286] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679290] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679293] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760) 00:26:08.804 [2024-10-14 17:43:07.679298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.804 [2024-10-14 17:43:07.679307] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.804 [2024-10-14 17:43:07.679368] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.804 [2024-10-14 17:43:07.679373] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.804 [2024-10-14 17:43:07.679376] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679379] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760 00:26:08.804 [2024-10-14 17:43:07.679387] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679391] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679394] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760) 00:26:08.804 [2024-10-14 17:43:07.679399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.804 [2024-10-14 17:43:07.679408] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.804 [2024-10-14 17:43:07.679469] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.804 [2024-10-14 17:43:07.679474] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.804 [2024-10-14 17:43:07.679477] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679480] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760 00:26:08.804 [2024-10-14 17:43:07.679488] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679492] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679495] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760) 00:26:08.804 [2024-10-14 17:43:07.679500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.804 [2024-10-14 17:43:07.679509] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.804 [2024-10-14 17:43:07.679567] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.804 [2024-10-14 17:43:07.679575] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:26:08.804 [2024-10-14 17:43:07.679578] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679581] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760 00:26:08.804 [2024-10-14 17:43:07.679589] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679593] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679596] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760) 00:26:08.804 [2024-10-14 17:43:07.679604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.804 [2024-10-14 17:43:07.679614] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.804 [2024-10-14 17:43:07.679743] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.804 [2024-10-14 17:43:07.679748] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.804 [2024-10-14 17:43:07.679751] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679755] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760 00:26:08.804 [2024-10-14 17:43:07.679763] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679767] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679770] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760) 00:26:08.804 [2024-10-14 17:43:07.679775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.804 [2024-10-14 17:43:07.679784] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.804 [2024-10-14 17:43:07.679873] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.804 [2024-10-14 17:43:07.679879] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.804 [2024-10-14 17:43:07.679882] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.804 [2024-10-14 17:43:07.679885] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760 00:26:08.805 [2024-10-14 17:43:07.679893] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.805 [2024-10-14 17:43:07.679896] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.805 [2024-10-14 17:43:07.679899] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760) 00:26:08.805 [2024-10-14 17:43:07.679905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.805 [2024-10-14 17:43:07.679914] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.805 [2024-10-14 17:43:07.680024] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.805 [2024-10-14 17:43:07.680029] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.805 [2024-10-14 17:43:07.680032] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.805 [2024-10-14 17:43:07.680035] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x130c900) on tqpair=0x12ac760 00:26:08.805 [2024-10-14 17:43:07.680043] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.805 [2024-10-14 17:43:07.680047] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.805 [2024-10-14 17:43:07.680050] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760) 00:26:08.805 [2024-10-14 17:43:07.680055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.805 [2024-10-14 17:43:07.680064] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.805 [2024-10-14 17:43:07.680124] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.805 [2024-10-14 17:43:07.680129] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.805 [2024-10-14 17:43:07.680135] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.805 [2024-10-14 17:43:07.680138] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760 00:26:08.805 [2024-10-14 17:43:07.680146] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.805 [2024-10-14 17:43:07.680150] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.805 [2024-10-14 17:43:07.680153] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760) 00:26:08.805 [2024-10-14 17:43:07.680158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.805 [2024-10-14 17:43:07.680167] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.805 [2024-10-14 17:43:07.680275] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.805 [2024-10-14 17:43:07.680281] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.805 [2024-10-14 17:43:07.680284] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.805 [2024-10-14 17:43:07.680287] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760 00:26:08.805 [2024-10-14 17:43:07.680295] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.805 [2024-10-14 17:43:07.680298] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.805 [2024-10-14 17:43:07.680301] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760) 00:26:08.805 [2024-10-14 17:43:07.680306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.805 [2024-10-14 17:43:07.680315] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0 00:26:08.805 [2024-10-14 17:43:07.680426] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.805 [2024-10-14 17:43:07.680432] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.805 [2024-10-14 17:43:07.680435] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.805 [2024-10-14 17:43:07.680438] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760 00:26:08.805 [2024-10-14 17:43:07.680446] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.805 [2024-10-14 17:43:07.680449] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.805 [2024-10-14 17:43:07.680452] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760)
00:26:08.805 [2024-10-14 17:43:07.680458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.805 [2024-10-14 17:43:07.680467] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0
00:26:08.805 [2024-10-14 17:43:07.680527] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.805 [2024-10-14 17:43:07.680533] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.805 [2024-10-14 17:43:07.680536] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.805 [2024-10-14 17:43:07.680539] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760
00:26:08.805 [2024-10-14 17:43:07.680547] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.805 [2024-10-14 17:43:07.680550] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.805 [2024-10-14 17:43:07.680553] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760)
00:26:08.805 [2024-10-14 17:43:07.680559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.805 [2024-10-14 17:43:07.680567] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0
00:26:08.805 [2024-10-14 17:43:07.684608] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.805 [2024-10-14 17:43:07.684615] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.805 [2024-10-14 17:43:07.684618] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.805 [2024-10-14 17:43:07.684624] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760
00:26:08.805 [2024-10-14 17:43:07.684633] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.805 [2024-10-14 17:43:07.684637] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.805 [2024-10-14 17:43:07.684640] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ac760)
00:26:08.805 [2024-10-14 17:43:07.684646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.805 [2024-10-14 17:43:07.684656] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x130c900, cid 3, qid 0
00:26:08.805 [2024-10-14 17:43:07.684842] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.805 [2024-10-14 17:43:07.684847] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.805 [2024-10-14 17:43:07.684850] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.805 [2024-10-14 17:43:07.684854] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x130c900) on tqpair=0x12ac760
00:26:08.805 [2024-10-14 17:43:07.684860] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds
00:26:08.805
00:26:08.805 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:26:08.805 [2024-10-14 17:43:07.721432] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization...
00:26:08.805 [2024-10-14 17:43:07.721465] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189271 ]
00:26:08.805 [2024-10-14 17:43:07.747325] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:26:08.805 [2024-10-14 17:43:07.747365] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:26:08.805 [2024-10-14 17:43:07.747370] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:26:08.805 [2024-10-14 17:43:07.747381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:26:08.805 [2024-10-14 17:43:07.747388] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:26:08.805 [2024-10-14 17:43:07.750833] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:26:08.805 [2024-10-14 17:43:07.750864] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1069760 0
00:26:08.805 [2024-10-14 17:43:07.758608] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:26:08.805 [2024-10-14 17:43:07.758627] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:26:08.805 [2024-10-14 17:43:07.758632] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:26:08.805 [2024-10-14 17:43:07.758635] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:26:08.805 [2024-10-14 17:43:07.758658] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.805 [2024-10-14 17:43:07.758663] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.805 [2024-10-14 17:43:07.758666] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1069760)
00:26:08.805 [2024-10-14 17:43:07.758676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:26:08.805 [2024-10-14 17:43:07.758693] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9480, cid 0, qid 0
00:26:08.805 [2024-10-14 17:43:07.766609] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.805 [2024-10-14 17:43:07.766620] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.805 [2024-10-14 17:43:07.766624] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.805 [2024-10-14 17:43:07.766627] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9480) on tqpair=0x1069760
00:26:08.805 [2024-10-14 17:43:07.766636] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:26:08.805 [2024-10-14 17:43:07.766642] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:26:08.805 [2024-10-14 17:43:07.766646] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:26:08.805 [2024-10-14 17:43:07.766656] nvme_tcp.c:
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.805 [2024-10-14 17:43:07.766659] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.805 [2024-10-14 17:43:07.766663] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1069760) 00:26:08.805 [2024-10-14 17:43:07.766670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.805 [2024-10-14 17:43:07.766682] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9480, cid 0, qid 0 00:26:08.805 [2024-10-14 17:43:07.766840] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.805 [2024-10-14 17:43:07.766846] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.805 [2024-10-14 17:43:07.766849] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.805 [2024-10-14 17:43:07.766852] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9480) on tqpair=0x1069760 00:26:08.805 [2024-10-14 17:43:07.766857] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:08.805 [2024-10-14 17:43:07.766863] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:08.805 [2024-10-14 17:43:07.766869] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.805 [2024-10-14 17:43:07.766873] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.805 [2024-10-14 17:43:07.766876] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1069760) 00:26:08.805 [2024-10-14 17:43:07.766881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.805 [2024-10-14 17:43:07.766891] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9480, cid 0, qid 0 00:26:08.805 [2024-10-14 17:43:07.766955] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.806 [2024-10-14 17:43:07.766960] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.806 [2024-10-14 17:43:07.766963] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.766966] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9480) on tqpair=0x1069760 00:26:08.806 [2024-10-14 17:43:07.766971] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:08.806 [2024-10-14 17:43:07.766978] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:26:08.806 [2024-10-14 17:43:07.766983] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.766987] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.766990] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1069760) 00:26:08.806 [2024-10-14 17:43:07.766995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.806 [2024-10-14 17:43:07.767005] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9480, cid 0, qid 0 00:26:08.806 [2024-10-14 17:43:07.767064] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.806 [2024-10-14 17:43:07.767071] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.806 [2024-10-14 17:43:07.767074] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767078] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9480) on tqpair=0x1069760 00:26:08.806 [2024-10-14 17:43:07.767082] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:08.806 [2024-10-14 17:43:07.767090] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767094] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767097] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1069760) 00:26:08.806 [2024-10-14 17:43:07.767102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.806 [2024-10-14 17:43:07.767112] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9480, cid 0, qid 0 00:26:08.806 [2024-10-14 17:43:07.767169] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.806 [2024-10-14 17:43:07.767175] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.806 [2024-10-14 17:43:07.767178] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767181] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9480) on tqpair=0x1069760 00:26:08.806 [2024-10-14 17:43:07.767185] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:08.806 [2024-10-14 17:43:07.767190] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:08.806 [2024-10-14 17:43:07.767197] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:08.806 [2024-10-14 17:43:07.767302] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:26:08.806 [2024-10-14 17:43:07.767305] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:08.806 [2024-10-14 17:43:07.767312] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767315] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767318] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1069760) 00:26:08.806 [2024-10-14 17:43:07.767324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.806 [2024-10-14 17:43:07.767333] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9480, cid 0, qid 0 00:26:08.806 [2024-10-14 17:43:07.767393] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.806 [2024-10-14 17:43:07.767399] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.806 [2024-10-14 17:43:07.767402] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767405] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9480) on tqpair=0x1069760 00:26:08.806 [2024-10-14 17:43:07.767409] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:08.806 [2024-10-14 17:43:07.767418] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767421] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767425] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1069760) 00:26:08.806 [2024-10-14 17:43:07.767430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.806 [2024-10-14 17:43:07.767439] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9480, cid 0, qid 0 00:26:08.806 [2024-10-14 17:43:07.767503] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.806 [2024-10-14 17:43:07.767510] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.806 [2024-10-14 17:43:07.767513] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767517] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9480) on tqpair=0x1069760 00:26:08.806 [2024-10-14 17:43:07.767521] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:08.806 [2024-10-14 17:43:07.767525] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:26:08.806 [2024-10-14 17:43:07.767531] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:26:08.806 [2024-10-14 17:43:07.767541] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:26:08.806 [2024-10-14 17:43:07.767549] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767552] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1069760) 00:26:08.806 [2024-10-14 17:43:07.767558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.806 [2024-10-14 17:43:07.767568] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9480, cid 0, qid 0 00:26:08.806 [2024-10-14 17:43:07.767666] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.806 [2024-10-14 17:43:07.767673] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.806 [2024-10-14 17:43:07.767676] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767679] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1069760): datao=0, datal=4096, cccid=0 00:26:08.806 [2024-10-14 17:43:07.767683] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10c9480) on tqpair(0x1069760): expected_datao=0, payload_size=4096 00:26:08.806 [2024-10-14 17:43:07.767687] nvme_tcp.c: 800:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767693] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767697] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767720] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.806 [2024-10-14 17:43:07.767726] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.806 [2024-10-14 17:43:07.767729] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767732] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9480) on tqpair=0x1069760 00:26:08.806 [2024-10-14 17:43:07.767738] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:26:08.806 [2024-10-14 17:43:07.767742] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:26:08.806 [2024-10-14 17:43:07.767745] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:26:08.806 [2024-10-14 17:43:07.767749] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:26:08.806 [2024-10-14 17:43:07.767753] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:26:08.806 [2024-10-14 17:43:07.767757] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:26:08.806 [2024-10-14 17:43:07.767768] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:26:08.806 [2024-10-14 17:43:07.767773] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767777] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767780] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1069760) 00:26:08.806 [2024-10-14 17:43:07.767787] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:08.806 [2024-10-14 17:43:07.767798] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9480, cid 0, qid 0 00:26:08.806 [2024-10-14 17:43:07.767858] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.806 [2024-10-14 17:43:07.767864] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.806 [2024-10-14 17:43:07.767867] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767870] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9480) on tqpair=0x1069760 00:26:08.806 [2024-10-14 17:43:07.767875] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767879] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767881] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1069760) 00:26:08.806 [2024-10-14 17:43:07.767887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.806 [2024-10-14 17:43:07.767892] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:26:08.806 [2024-10-14 17:43:07.767895] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767898] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1069760) 00:26:08.806 [2024-10-14 17:43:07.767903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.806 [2024-10-14 17:43:07.767908] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767911] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767914] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1069760) 00:26:08.806 [2024-10-14 17:43:07.767919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.806 [2024-10-14 17:43:07.767924] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.806 [2024-10-14 17:43:07.767927] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.767930] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1069760) 00:26:08.807 [2024-10-14 17:43:07.767935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.807 [2024-10-14 17:43:07.767939] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:08.807 [2024-10-14 17:43:07.767948] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:08.807 [2024-10-14 17:43:07.767953] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.767957] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1069760) 00:26:08.807 [2024-10-14 17:43:07.767962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.807 [2024-10-14 17:43:07.767973] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9480, cid 0, qid 0 00:26:08.807 [2024-10-14 17:43:07.767977] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9600, cid 1, qid 0 00:26:08.807 [2024-10-14 17:43:07.767982] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9780, cid 2, qid 0 00:26:08.807 [2024-10-14 17:43:07.767985] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9900, cid 3, qid 0 00:26:08.807 [2024-10-14 17:43:07.767989] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9a80, cid 4, qid 0 00:26:08.807 [2024-10-14 17:43:07.768086] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.807 [2024-10-14 17:43:07.768094] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.807 [2024-10-14 17:43:07.768097] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768101] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9a80) on tqpair=0x1069760 00:26:08.807 [2024-10-14 17:43:07.768105] nvme_ctrlr.c:3077:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep 
alive every 5000000 us 00:26:08.807 [2024-10-14 17:43:07.768109] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:08.807 [2024-10-14 17:43:07.768115] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:08.807 [2024-10-14 17:43:07.768122] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:08.807 [2024-10-14 17:43:07.768128] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768131] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768134] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1069760) 00:26:08.807 [2024-10-14 17:43:07.768139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:08.807 [2024-10-14 17:43:07.768149] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9a80, cid 4, qid 0 00:26:08.807 [2024-10-14 17:43:07.768215] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.807 [2024-10-14 17:43:07.768220] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.807 [2024-10-14 17:43:07.768223] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768227] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9a80) on tqpair=0x1069760 00:26:08.807 [2024-10-14 17:43:07.768276] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:08.807 [2024-10-14 17:43:07.768286] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:08.807 [2024-10-14 17:43:07.768292] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768295] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1069760) 00:26:08.807 [2024-10-14 17:43:07.768300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.807 [2024-10-14 17:43:07.768310] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9a80, cid 4, qid 0 00:26:08.807 [2024-10-14 17:43:07.768386] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.807 [2024-10-14 17:43:07.768392] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.807 [2024-10-14 17:43:07.768395] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768398] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1069760): datao=0, datal=4096, cccid=4 00:26:08.807 [2024-10-14 17:43:07.768402] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10c9a80) on tqpair(0x1069760): expected_datao=0, payload_size=4096 00:26:08.807 [2024-10-14 17:43:07.768405] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768411] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768414] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768423] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.807 [2024-10-14 17:43:07.768428] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.807 [2024-10-14 17:43:07.768431] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768434] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9a80) on tqpair=0x1069760 00:26:08.807 [2024-10-14 17:43:07.768446] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:08.807 [2024-10-14 17:43:07.768456] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:08.807 [2024-10-14 17:43:07.768464] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:08.807 [2024-10-14 17:43:07.768470] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768473] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1069760) 00:26:08.807 [2024-10-14 17:43:07.768478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.807 [2024-10-14 17:43:07.768488] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9a80, cid 4, qid 0 00:26:08.807 [2024-10-14 17:43:07.768573] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.807 [2024-10-14 17:43:07.768579] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.807 [2024-10-14 17:43:07.768582] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768585] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1069760): datao=0, datal=4096, cccid=4 00:26:08.807 [2024-10-14 17:43:07.768589] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10c9a80) on tqpair(0x1069760): expected_datao=0, payload_size=4096 00:26:08.807 [2024-10-14 17:43:07.768592] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768597] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768605] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768614] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.807 [2024-10-14 17:43:07.768619] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.807 [2024-10-14 17:43:07.768622] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768625] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9a80) on tqpair=0x1069760 00:26:08.807 [2024-10-14 17:43:07.768635] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:08.807 [2024-10-14 17:43:07.768643] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:08.807 [2024-10-14 17:43:07.768649] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.807 [2024-10-14 
17:43:07.768653] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1069760) 00:26:08.807 [2024-10-14 17:43:07.768658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.807 [2024-10-14 17:43:07.768668] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9a80, cid 4, qid 0 00:26:08.807 [2024-10-14 17:43:07.768741] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.807 [2024-10-14 17:43:07.768746] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.807 [2024-10-14 17:43:07.768749] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768752] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1069760): datao=0, datal=4096, cccid=4 00:26:08.807 [2024-10-14 17:43:07.768756] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10c9a80) on tqpair(0x1069760): expected_datao=0, payload_size=4096 00:26:08.807 [2024-10-14 17:43:07.768760] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768776] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768780] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768813] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.807 [2024-10-14 17:43:07.768820] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.807 [2024-10-14 17:43:07.768823] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768827] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9a80) on tqpair=0x1069760 00:26:08.807 [2024-10-14 17:43:07.768832] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:08.807 [2024-10-14 17:43:07.768839] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:26:08.807 [2024-10-14 17:43:07.768846] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:08.807 [2024-10-14 17:43:07.768851] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:26:08.807 [2024-10-14 17:43:07.768856] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:08.807 [2024-10-14 17:43:07.768860] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:26:08.807 [2024-10-14 17:43:07.768864] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:08.807 [2024-10-14 17:43:07.768868] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:08.807 [2024-10-14 17:43:07.768873] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:26:08.807 [2024-10-14 17:43:07.768884] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.807 [2024-10-14 17:43:07.768888] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1069760) 00:26:08.808 [2024-10-14 17:43:07.768893] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.808 [2024-10-14 17:43:07.768898] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.768902] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.768905] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1069760) 00:26:08.808 [2024-10-14 17:43:07.768910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.808 [2024-10-14 17:43:07.768920] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9a80, cid 4, qid 0 00:26:08.808 [2024-10-14 17:43:07.768925] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9c00, cid 5, qid 0 00:26:08.808 [2024-10-14 17:43:07.769008] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.808 [2024-10-14 17:43:07.769013] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.808 [2024-10-14 17:43:07.769016] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769019] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9a80) on tqpair=0x1069760 00:26:08.808 [2024-10-14 17:43:07.769025] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.808 [2024-10-14 17:43:07.769030] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.808 [2024-10-14 17:43:07.769033] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769036] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9c00) on tqpair=0x1069760 00:26:08.808 [2024-10-14 17:43:07.769044] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769048] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1069760) 00:26:08.808 [2024-10-14 17:43:07.769053] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.808 [2024-10-14 17:43:07.769064] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9c00, cid 5, qid 0 00:26:08.808 [2024-10-14 17:43:07.769123] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.808 [2024-10-14 17:43:07.769129] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.808 [2024-10-14 17:43:07.769132] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769135] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9c00) on tqpair=0x1069760 00:26:08.808 [2024-10-14 17:43:07.769143] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769147] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1069760) 00:26:08.808 [2024-10-14 17:43:07.769152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
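The Get Features commands traced here walk the standard feature identifiers one by one; the cdw10 value in each *NOTICE* line is the raw feature ID (0x01 arbitration, 0x02 power management, 0x04 temperature threshold, 0x07 number of queues, and 0x0f for the keep alive timer polled earlier). A rough interactive equivalent using nvme-cli is sketched below; it assumes the subsystem has been attached through the kernel initiator and enumerated as /dev/nvme0, which is not part of this capture:

  $ nvme get-feature /dev/nvme0 -f 0x02   # POWER MANAGEMENT, matches cdw10:00000002 above
  $ nvme get-feature /dev/nvme0 -f 0x04   # TEMPERATURE THRESHOLD, matches cdw10:00000004 above
  $ nvme get-feature /dev/nvme0 -f 0x07   # NUMBER OF QUEUES, matches cdw10:00000007
  $ nvme get-feature /dev/nvme0 -f 0x0f   # KEEP ALIVE TIMER, matches cdw10:0000000f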
00:26:08.808 [2024-10-14 17:43:07.769161] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9c00, cid 5, qid 0 00:26:08.808 [2024-10-14 17:43:07.769223] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.808 [2024-10-14 17:43:07.769229] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.808 [2024-10-14 17:43:07.769232] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769235] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9c00) on tqpair=0x1069760 00:26:08.808 [2024-10-14 17:43:07.769242] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769245] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1069760) 00:26:08.808 [2024-10-14 17:43:07.769251] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.808 [2024-10-14 17:43:07.769260] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9c00, cid 5, qid 0 00:26:08.808 [2024-10-14 17:43:07.769324] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:08.808 [2024-10-14 17:43:07.769330] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:08.808 [2024-10-14 17:43:07.769333] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769336] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9c00) on tqpair=0x1069760 00:26:08.808 [2024-10-14 17:43:07.769348] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769352] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1069760) 00:26:08.808 [2024-10-14 17:43:07.769357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.808 [2024-10-14 17:43:07.769363] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769366] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1069760) 00:26:08.808 [2024-10-14 17:43:07.769371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.808 [2024-10-14 17:43:07.769377] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769380] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1069760) 00:26:08.808 [2024-10-14 17:43:07.769385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.808 [2024-10-14 17:43:07.769391] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769394] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1069760) 00:26:08.808 [2024-10-14 17:43:07.769399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.808 [2024-10-14 17:43:07.769411] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x10c9c00, cid 5, qid 0 00:26:08.808 [2024-10-14 17:43:07.769415] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9a80, cid 4, qid 0 00:26:08.808 [2024-10-14 17:43:07.769419] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9d80, cid 6, qid 0 00:26:08.808 [2024-10-14 17:43:07.769423] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9f00, cid 7, qid 0 00:26:08.808 [2024-10-14 17:43:07.769562] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.808 [2024-10-14 17:43:07.769567] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.808 [2024-10-14 17:43:07.769570] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769573] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1069760): datao=0, datal=8192, cccid=5 00:26:08.808 [2024-10-14 17:43:07.769577] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10c9c00) on tqpair(0x1069760): expected_datao=0, payload_size=8192 00:26:08.808 [2024-10-14 17:43:07.769581] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769592] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769595] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769608] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.808 [2024-10-14 17:43:07.769613] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.808 [2024-10-14 17:43:07.769616] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769619] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1069760): datao=0, datal=512, cccid=4 00:26:08.808 [2024-10-14 17:43:07.769623] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10c9a80) on tqpair(0x1069760): expected_datao=0, payload_size=512 00:26:08.808 [2024-10-14 17:43:07.769626] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769632] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769635] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769639] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.808 [2024-10-14 17:43:07.769644] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:08.808 [2024-10-14 17:43:07.769647] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769650] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1069760): datao=0, datal=512, cccid=6 00:26:08.808 [2024-10-14 17:43:07.769654] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10c9d80) on tqpair(0x1069760): expected_datao=0, payload_size=512 00:26:08.808 [2024-10-14 17:43:07.769657] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769662] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769665] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:08.808 [2024-10-14 17:43:07.769670] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:08.808 [2024-10-14 17:43:07.769675] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
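The four GET LOG PAGE (02h) captures above fetch the mandatory log pages in one burst. In each cdw10 the low byte selects the log identifier and bits 31:16 carry the zero-based dword count, so 07ff0001 is the Error Information log (01h), 007f0002 the SMART / Health log (02h), 007f0003 the Firmware Slot log (03h), and 03ff0005 the Commands Supported and Effects log (05h); the requested sizes line up with the c2h_data datal values above (8192, 512, 512 and 4096 bytes). A hedged nvme-cli equivalent, again assuming a kernel-attached /dev/nvme0 rather than the userspace initiator this test uses, would be:

  $ nvme error-log /dev/nvme0     # log page 01h
  $ nvme smart-log /dev/nvme0     # log page 02h
  $ nvme fw-log /dev/nvme0        # log page 03h
  $ nvme effects-log /dev/nvme0   # log page 05h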
00:26:08.808 [2024-10-14 17:43:07.769678] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:26:08.808 [2024-10-14 17:43:07.769680] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1069760): datao=0, datal=4096, cccid=7
00:26:08.808 [2024-10-14 17:43:07.769684] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10c9f00) on tqpair(0x1069760): expected_datao=0, payload_size=4096
00:26:08.808 [2024-10-14 17:43:07.769688] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.808 [2024-10-14 17:43:07.769693] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:26:08.808 [2024-10-14 17:43:07.769696] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:26:08.808 [2024-10-14 17:43:07.769703] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.808 [2024-10-14 17:43:07.769708] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.808 [2024-10-14 17:43:07.769713] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.808 [2024-10-14 17:43:07.769716] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9c00) on tqpair=0x1069760
00:26:08.808 [2024-10-14 17:43:07.769725] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.808 [2024-10-14 17:43:07.769730] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.808 [2024-10-14 17:43:07.769733] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.808 [2024-10-14 17:43:07.769736] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9a80) on tqpair=0x1069760
00:26:08.809 [2024-10-14 17:43:07.769744] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.809 [2024-10-14 17:43:07.769749] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.809 [2024-10-14 17:43:07.769752] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.809 [2024-10-14 17:43:07.769755] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9d80) on tqpair=0x1069760
00:26:08.809 [2024-10-14 17:43:07.769761] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.809 [2024-10-14 17:43:07.769766] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.809 [2024-10-14 17:43:07.769769] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.809 [2024-10-14 17:43:07.769772] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9f00) on tqpair=0x1069760
00:26:08.809 =====================================================
00:26:08.809 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:08.809 =====================================================
00:26:08.809 Controller Capabilities/Features
00:26:08.809 ================================
00:26:08.809 Vendor ID: 8086
00:26:08.809 Subsystem Vendor ID: 8086
00:26:08.809 Serial Number: SPDK00000000000001
00:26:08.809 Model Number: SPDK bdev Controller
00:26:08.809 Firmware Version: 25.01
00:26:08.809 Recommended Arb Burst: 6
00:26:08.809 IEEE OUI Identifier: e4 d2 5c
00:26:08.809 Multi-path I/O
00:26:08.809 May have multiple subsystem ports: Yes
00:26:08.809 May have multiple controllers: Yes
00:26:08.809 Associated with SR-IOV VF: No
00:26:08.809 Max Data Transfer Size: 131072
00:26:08.809 Max Number of Namespaces: 32
00:26:08.809 Max Number of I/O Queues: 127
00:26:08.809 NVMe Specification Version (VS): 1.3
00:26:08.809 NVMe Specification Version (Identify): 1.3
00:26:08.809 Maximum Queue Entries: 128
00:26:08.809 Contiguous Queues Required: Yes
00:26:08.809 Arbitration Mechanisms Supported
00:26:08.809 Weighted Round Robin: Not Supported
00:26:08.809 Vendor Specific: Not Supported
00:26:08.809 Reset Timeout: 15000 ms
00:26:08.809 Doorbell Stride: 4 bytes
00:26:08.809 NVM Subsystem Reset: Not Supported
00:26:08.809 Command Sets Supported
00:26:08.809 NVM Command Set: Supported
00:26:08.809 Boot Partition: Not Supported
00:26:08.809 Memory Page Size Minimum: 4096 bytes
00:26:08.809 Memory Page Size Maximum: 4096 bytes
00:26:08.809 Persistent Memory Region: Not Supported
00:26:08.809 Optional Asynchronous Events Supported
00:26:08.809 Namespace Attribute Notices: Supported
00:26:08.809 Firmware Activation Notices: Not Supported
00:26:08.809 ANA Change Notices: Not Supported
00:26:08.809 PLE Aggregate Log Change Notices: Not Supported
00:26:08.809 LBA Status Info Alert Notices: Not Supported
00:26:08.809 EGE Aggregate Log Change Notices: Not Supported
00:26:08.809 Normal NVM Subsystem Shutdown event: Not Supported
00:26:08.809 Zone Descriptor Change Notices: Not Supported
00:26:08.809 Discovery Log Change Notices: Not Supported
00:26:08.809 Controller Attributes
00:26:08.809 128-bit Host Identifier: Supported
00:26:08.809 Non-Operational Permissive Mode: Not Supported
00:26:08.809 NVM Sets: Not Supported
00:26:08.809 Read Recovery Levels: Not Supported
00:26:08.809 Endurance Groups: Not Supported
00:26:08.809 Predictable Latency Mode: Not Supported
00:26:08.809 Traffic Based Keep ALive: Not Supported
00:26:08.809 Namespace Granularity: Not Supported
00:26:08.809 SQ Associations: Not Supported
00:26:08.809 UUID List: Not Supported
00:26:08.809 Multi-Domain Subsystem: Not Supported
00:26:08.809 Fixed Capacity Management: Not Supported
00:26:08.809 Variable Capacity Management: Not Supported
00:26:08.809 Delete Endurance Group: Not Supported
00:26:08.809 Delete NVM Set: Not Supported
00:26:08.809 Extended LBA Formats Supported: Not Supported
00:26:08.809 Flexible Data Placement Supported: Not Supported
00:26:08.809
00:26:08.809 Controller Memory Buffer Support
00:26:08.809 ================================
00:26:08.809 Supported: No
00:26:08.809
00:26:08.809 Persistent Memory Region Support
00:26:08.809 ================================
00:26:08.809 Supported: No
00:26:08.809
00:26:08.809 Admin Command Set Attributes
00:26:08.809 ============================
00:26:08.809 Security Send/Receive: Not Supported
00:26:08.809 Format NVM: Not Supported
00:26:08.809 Firmware Activate/Download: Not Supported
00:26:08.809 Namespace Management: Not Supported
00:26:08.809 Device Self-Test: Not Supported
00:26:08.809 Directives: Not Supported
00:26:08.809 NVMe-MI: Not Supported
00:26:08.809 Virtualization Management: Not Supported
00:26:08.809 Doorbell Buffer Config: Not Supported
00:26:08.809 Get LBA Status Capability: Not Supported
00:26:08.809 Command & Feature Lockdown Capability: Not Supported
00:26:08.809 Abort Command Limit: 4
00:26:08.809 Async Event Request Limit: 4
00:26:08.809 Number of Firmware Slots: N/A
00:26:08.809 Firmware Slot 1 Read-Only: N/A
00:26:08.809 Firmware Activation Without Reset: N/A
00:26:08.809 Multiple Update Detection Support: N/A
00:26:08.809 Firmware Update Granularity: No Information Provided
00:26:08.809 Per-Namespace SMART Log: No
00:26:08.809 Asymmetric Namespace Access Log Page: Not Supported
00:26:08.809 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:26:08.809 Command Effects Log Page: Supported
00:26:08.809 Get Log Page Extended Data: Supported
00:26:08.809 Telemetry Log Pages: Not Supported
00:26:08.809 Persistent Event Log Pages: Not Supported
00:26:08.809 Supported Log Pages Log Page: May Support
00:26:08.809 Commands Supported & Effects Log Page: Not Supported
00:26:08.809 Feature Identifiers & Effects Log Page:May Support
00:26:08.809 NVMe-MI Commands & Effects Log Page: May Support
00:26:08.809 Data Area 4 for Telemetry Log: Not Supported
00:26:08.809 Error Log Page Entries Supported: 128
00:26:08.809 Keep Alive: Supported
00:26:08.809 Keep Alive Granularity: 10000 ms
00:26:08.809
00:26:08.809 NVM Command Set Attributes
00:26:08.809 ==========================
00:26:08.809 Submission Queue Entry Size
00:26:08.809 Max: 64
00:26:08.809 Min: 64
00:26:08.809 Completion Queue Entry Size
00:26:08.809 Max: 16
00:26:08.809 Min: 16
00:26:08.809 Number of Namespaces: 32
00:26:08.809 Compare Command: Supported
00:26:08.809 Write Uncorrectable Command: Not Supported
00:26:08.809 Dataset Management Command: Supported
00:26:08.809 Write Zeroes Command: Supported
00:26:08.809 Set Features Save Field: Not Supported
00:26:08.809 Reservations: Supported
00:26:08.809 Timestamp: Not Supported
00:26:08.809 Copy: Supported
00:26:08.809 Volatile Write Cache: Present
00:26:08.809 Atomic Write Unit (Normal): 1
00:26:08.809 Atomic Write Unit (PFail): 1
00:26:08.809 Atomic Compare & Write Unit: 1
00:26:08.809 Fused Compare & Write: Supported
00:26:08.809 Scatter-Gather List
00:26:08.809 SGL Command Set: Supported
00:26:08.809 SGL Keyed: Supported
00:26:08.809 SGL Bit Bucket Descriptor: Not Supported
00:26:08.809 SGL Metadata Pointer: Not Supported
00:26:08.809 Oversized SGL: Not Supported
00:26:08.809 SGL Metadata Address: Not Supported
00:26:08.809 SGL Offset: Supported
00:26:08.809 Transport SGL Data Block: Not Supported
00:26:08.809 Replay Protected Memory Block: Not Supported
00:26:08.809
00:26:08.809 Firmware Slot Information
00:26:08.809 =========================
00:26:08.809 Active slot: 1
00:26:08.809 Slot 1 Firmware Revision: 25.01
00:26:08.809
00:26:08.809
00:26:08.809 Commands Supported and Effects
00:26:08.809 ==============================
00:26:08.809 Admin Commands
00:26:08.809 --------------
00:26:08.809 Get Log Page (02h): Supported
00:26:08.809 Identify (06h): Supported
00:26:08.809 Abort (08h): Supported
00:26:08.809 Set Features (09h): Supported
00:26:08.809 Get Features (0Ah): Supported
00:26:08.809 Asynchronous Event Request (0Ch): Supported
00:26:08.809 Keep Alive (18h): Supported
00:26:08.809 I/O Commands
00:26:08.809 ------------
00:26:08.809 Flush (00h): Supported LBA-Change
00:26:08.809 Write (01h): Supported LBA-Change
00:26:08.809 Read (02h): Supported
00:26:08.809 Compare (05h): Supported
00:26:08.809 Write Zeroes (08h): Supported LBA-Change
00:26:08.809 Dataset Management (09h): Supported LBA-Change
00:26:08.809 Copy (19h): Supported LBA-Change
00:26:08.809
00:26:08.809 Error Log
00:26:08.809 =========
00:26:08.809
00:26:08.809 Arbitration
00:26:08.809 ===========
00:26:08.809 Arbitration Burst: 1
00:26:08.809
00:26:08.809 Power Management
00:26:08.809 ================
00:26:08.809 Number of Power States: 1
00:26:08.809 Current Power State: Power State #0
00:26:08.809 Power State #0:
00:26:08.809 Max Power: 0.00 W
00:26:08.809 Non-Operational State: Operational
00:26:08.809 Entry Latency: Not Reported
00:26:08.809 Exit Latency: Not Reported
00:26:08.809 Relative Read Throughput: 0
00:26:08.809 Relative Read Latency: 0
00:26:08.809 Relative Write Throughput: 0
00:26:08.809 Relative Write Latency: 0
00:26:08.809 Idle Power: Not Reported
00:26:08.809 Active Power: Not Reported
00:26:08.809 Non-Operational Permissive Mode: Not Supported
00:26:08.809
00:26:08.809 Health Information
00:26:08.809 ==================
00:26:08.809 Critical Warnings:
00:26:08.809 Available Spare Space: OK
00:26:08.809 Temperature: OK
00:26:08.809 Device Reliability: OK
00:26:08.809 Read Only: No
00:26:08.809 Volatile Memory Backup: OK
00:26:08.809 Current Temperature: 0 Kelvin (-273 Celsius)
00:26:08.809 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:26:08.810 Available Spare: 0%
00:26:08.810 Available Spare Threshold: 0%
00:26:08.810 Life Percentage Used:[2024-10-14 17:43:07.769853] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.769858] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1069760)
00:26:08.810 [2024-10-14 17:43:07.769864] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.810 [2024-10-14 17:43:07.769875] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9f00, cid 7, qid 0
00:26:08.810 [2024-10-14 17:43:07.769949] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.810 [2024-10-14 17:43:07.769955] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.810 [2024-10-14 17:43:07.769958] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.769961] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9f00) on tqpair=0x1069760
00:26:08.810 [2024-10-14 17:43:07.769986] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:26:08.810 [2024-10-14 17:43:07.769995] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9480) on tqpair=0x1069760
00:26:08.810 [2024-10-14 17:43:07.770000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.810 [2024-10-14 17:43:07.770004] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9600) on tqpair=0x1069760
00:26:08.810 [2024-10-14 17:43:07.770008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.810 [2024-10-14 17:43:07.770013] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9780) on tqpair=0x1069760
00:26:08.810 [2024-10-14 17:43:07.770017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.810 [2024-10-14 17:43:07.770021] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9900) on tqpair=0x1069760
00:26:08.810 [2024-10-14 17:43:07.770025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:08.810 [2024-10-14 17:43:07.770031] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.770034] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.770037] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1069760)
00:26:08.810 [2024-10-14 17:43:07.770043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.810 [2024-10-14 17:43:07.770056] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9900, cid 3, qid 0
00:26:08.810 [2024-10-14 17:43:07.770117] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.810 [2024-10-14 17:43:07.770123] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.810 [2024-10-14 17:43:07.770126] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.770129] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9900) on tqpair=0x1069760
00:26:08.810 [2024-10-14 17:43:07.770135] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.770138] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.770141] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1069760)
00:26:08.810 [2024-10-14 17:43:07.770146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.810 [2024-10-14 17:43:07.770158] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9900, cid 3, qid 0
00:26:08.810 [2024-10-14 17:43:07.770229] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.810 [2024-10-14 17:43:07.770235] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.810 [2024-10-14 17:43:07.770237] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.770241] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9900) on tqpair=0x1069760
00:26:08.810 [2024-10-14 17:43:07.770245] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:26:08.810 [2024-10-14 17:43:07.770248] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:26:08.810 [2024-10-14 17:43:07.770256] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.770260] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.770263] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1069760)
00:26:08.810 [2024-10-14 17:43:07.770268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.810 [2024-10-14 17:43:07.770277] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9900, cid 3, qid 0
00:26:08.810 [2024-10-14 17:43:07.770339] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.810 [2024-10-14 17:43:07.770345] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.810 [2024-10-14 17:43:07.770348] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.770351] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9900) on tqpair=0x1069760
00:26:08.810 [2024-10-14 17:43:07.770359] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.770363] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.770366] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1069760)
00:26:08.810 [2024-10-14 17:43:07.770371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.810 [2024-10-14 17:43:07.770380] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9900, cid 3, qid 0
00:26:08.810 [2024-10-14 17:43:07.770440] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.810 [2024-10-14 17:43:07.770446] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.810 [2024-10-14 17:43:07.770449] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.770452] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9900) on tqpair=0x1069760
00:26:08.810 [2024-10-14 17:43:07.770460] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.770463] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.770466] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1069760)
00:26:08.810 [2024-10-14 17:43:07.770473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.810 [2024-10-14 17:43:07.770483] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9900, cid 3, qid 0
00:26:08.810 [2024-10-14 17:43:07.770545] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.810 [2024-10-14 17:43:07.770551] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.810 [2024-10-14 17:43:07.770554] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.770557] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9900) on tqpair=0x1069760
00:26:08.810 [2024-10-14 17:43:07.770565] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.770568] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.770571] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1069760)
00:26:08.810 [2024-10-14 17:43:07.770577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.810 [2024-10-14 17:43:07.770586] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9900, cid 3, qid 0
00:26:08.810 [2024-10-14 17:43:07.774609] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.810 [2024-10-14 17:43:07.774618] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.810 [2024-10-14 17:43:07.774621] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.774624] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9900) on tqpair=0x1069760
00:26:08.810 [2024-10-14 17:43:07.774633] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.774637] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.774640] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1069760)
00:26:08.810 [2024-10-14 17:43:07.774646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.810 [2024-10-14 17:43:07.774656] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10c9900, cid 3, qid 0
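The banner and tables above are the regular report of SPDK's identify example application; the remainder of the report (SMART counters and the namespace listing) resumes below after the shutdown trace, and the interleaved *DEBUG*/*NOTICE* lines appear only because the test enables every log flag. A minimal reproduction against the same target, assuming the example binary sits at build/examples/identify in the SPDK tree (the exact invocation is not shown in this capture), looks like:

  $ ./build/examples/identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -L all    # omit -L all to get the plain report without the debug trace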
00:26:08.810 [2024-10-14 17:43:07.774800] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:08.810 [2024-10-14 17:43:07.774806] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:08.810 [2024-10-14 17:43:07.774809] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:08.810 [2024-10-14 17:43:07.774812] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10c9900) on tqpair=0x1069760
00:26:08.810 [2024-10-14 17:43:07.774818] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds
00:26:08.810 0%
00:26:08.810 Data Units Read: 0
00:26:08.810 Data Units Written: 0
00:26:08.810 Host Read Commands: 0
00:26:08.810 Host Write Commands: 0
00:26:08.810 Controller Busy Time: 0 minutes
00:26:08.810 Power Cycles: 0
00:26:08.810 Power On Hours: 0 hours
00:26:08.810 Unsafe Shutdowns: 0
00:26:08.810 Unrecoverable Media Errors: 0
00:26:08.810 Lifetime Error Log Entries: 0
00:26:08.810 Warning Temperature Time: 0 minutes
00:26:08.810 Critical Temperature Time: 0 minutes
00:26:08.810
00:26:08.810 Number of Queues
00:26:08.810 ================
00:26:08.810 Number of I/O Submission Queues: 127
00:26:08.810 Number of I/O Completion Queues: 127
00:26:08.810
00:26:08.810 Active Namespaces
00:26:08.810 =================
00:26:08.810 Namespace ID:1
00:26:08.810 Error Recovery Timeout: Unlimited
00:26:08.810 Command Set Identifier: NVM (00h)
00:26:08.810 Deallocate: Supported
00:26:08.810 Deallocated/Unwritten Error: Not Supported
00:26:08.810 Deallocated Read Value: Unknown
00:26:08.810 Deallocate in Write Zeroes: Not Supported
00:26:08.810 Deallocated Guard Field: 0xFFFF
00:26:08.810 Flush: Supported
00:26:08.810 Reservation: Supported
00:26:08.810 Namespace Sharing Capabilities: Multiple Controllers
00:26:08.810 Size (in LBAs): 131072 (0GiB)
00:26:08.810 Capacity (in LBAs): 131072 (0GiB)
00:26:08.810 Utilization (in LBAs): 131072 (0GiB)
00:26:08.810 NGUID: ABCDEF0123456789ABCDEF0123456789
00:26:08.810 EUI64: ABCDEF0123456789
00:26:08.810 UUID: 056d7222-cdf2-4770-bbf8-73fe7d0b54bd
00:26:08.810 Thin Provisioning: Not Supported
00:26:08.810 Per-NS Atomic Units: Yes
00:26:08.810 Atomic Boundary Size (Normal): 0
00:26:08.810 Atomic Boundary Size (PFail): 0
00:26:08.810 Atomic Boundary Offset: 0
00:26:08.810 Maximum Single Source Range Length: 65535
00:26:08.810 Maximum Copy Length: 65535
00:26:08.810 Maximum Source Range Count: 1
00:26:08.810 NGUID/EUI64 Never Reused: No
00:26:08.810 Namespace Write Protected: No
00:26:08.810 Number of LBA Formats: 1
00:26:08.810 Current LBA Format: LBA Format #00
00:26:08.810 LBA Format #00: Data Size: 512 Metadata Size: 0
00:26:08.810
00:26:08.810 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:08.811 rmmod nvme_tcp
00:26:08.811 rmmod nvme_fabrics
00:26:08.811 rmmod nvme_keyring
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 1189145 ']'
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 1189145
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1189145 ']'
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1189145
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:08.811 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1189145
00:26:08.812 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:26:08.812 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:26:08.812 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1189145'
00:26:08.812 killing process with pid 1189145
00:26:08.812 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1189145
00:26:08.812 17:43:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1189145
00:26:09.070 17:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:26:09.070 17:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:26:09.070 17:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:26:09.070 17:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr
00:26:09.070 17:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save
00:26:09.070 17:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:26:09.070 17:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore
00:26:09.070 17:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:09.070 17:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:09.070 17:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:09.070 17:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:09.070 17:43:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:11.607 00:26:11.607 real 0m9.338s 00:26:11.607 user 0m5.235s 00:26:11.607 sys 0m4.895s 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:11.607 ************************************ 00:26:11.607 END TEST nvmf_identify 00:26:11.607 ************************************ 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.607 ************************************ 00:26:11.607 START TEST nvmf_perf 00:26:11.607 ************************************ 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:11.607 * Looking for test storage... 00:26:11.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:11.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.607 --rc genhtml_branch_coverage=1 00:26:11.607 --rc genhtml_function_coverage=1 00:26:11.607 --rc genhtml_legend=1 00:26:11.607 --rc geninfo_all_blocks=1 00:26:11.607 --rc geninfo_unexecuted_blocks=1 00:26:11.607 00:26:11.607 ' 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:11.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.607 --rc genhtml_branch_coverage=1 00:26:11.607 --rc genhtml_function_coverage=1 00:26:11.607 --rc genhtml_legend=1 00:26:11.607 --rc geninfo_all_blocks=1 00:26:11.607 --rc geninfo_unexecuted_blocks=1 00:26:11.607 00:26:11.607 ' 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:11.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.607 --rc genhtml_branch_coverage=1 00:26:11.607 --rc genhtml_function_coverage=1 00:26:11.607 --rc genhtml_legend=1 00:26:11.607 --rc geninfo_all_blocks=1 00:26:11.607 --rc geninfo_unexecuted_blocks=1 00:26:11.607 00:26:11.607 ' 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:11.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.607 --rc genhtml_branch_coverage=1 00:26:11.607 --rc genhtml_function_coverage=1 00:26:11.607 --rc genhtml_legend=1 00:26:11.607 --rc geninfo_all_blocks=1 00:26:11.607 --rc geninfo_unexecuted_blocks=1 00:26:11.607 00:26:11.607 ' 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.607 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:11.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.608 17:43:10 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:11.608 17:43:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:18.177 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:18.177 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:18.177 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:18.178 Found net devices under 0000:86:00.0: cvl_0_0 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:18.178 17:43:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:18.178 Found net devices under 0000:86:00.1: cvl_0_1 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:18.178 17:43:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:18.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:18.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:26:18.178 00:26:18.178 --- 10.0.0.2 ping statistics --- 00:26:18.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.178 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:18.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:18.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:26:18.178 00:26:18.178 --- 10.0.0.1 ping statistics --- 00:26:18.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.178 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=1192789 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 1192789 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1192789 ']' 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:26:18.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:18.178 [2024-10-14 17:43:16.479426] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:26:18.178 [2024-10-14 17:43:16.479474] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.178 [2024-10-14 17:43:16.551929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:18.178 [2024-10-14 17:43:16.594622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.178 [2024-10-14 17:43:16.594659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.178 [2024-10-14 17:43:16.594667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:18.178 [2024-10-14 17:43:16.594672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:18.178 [2024-10-14 17:43:16.594677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:18.178 [2024-10-14 17:43:16.596222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.178 [2024-10-14 17:43:16.596254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:18.178 [2024-10-14 17:43:16.596365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.178 [2024-10-14 17:43:16.596365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:18.178 17:43:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:20.712 17:43:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:20.712 17:43:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:20.971 17:43:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:26:20.971 17:43:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:21.230 17:43:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
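The trace that follows stands up the NVMe-oF target that every perf run below attaches to. As a minimal sketch, assuming the same workspace layout and addresses as this run (the NQN, serial number, bdev names, and the 10.0.0.2:4420 listener are taken from the rpc.py calls visible in the next entries, not from general defaults), the equivalent setup is:

    # Sketch of the target setup perf.sh performs in the entries below.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, spdk_nvme_perf connects with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420', as each of the runs below does.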
00:26:21.230 17:43:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']'
00:26:21.230 17:43:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:26:21.230 17:43:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:26:21.230 17:43:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:26:21.489 [2024-10-14 17:43:20.396101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:21.489 17:43:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:21.489 17:43:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:26:21.489 17:43:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:21.748 17:43:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:26:21.748 17:43:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:26:22.007 17:43:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:22.266 [2024-10-14 17:43:21.183006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:22.266 17:43:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:22.525 17:43:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']'
00:26:22.525 17:43:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:26:22.525 17:43:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:26:22.525 17:43:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:26:23.902 Initializing NVMe Controllers
00:26:23.902 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:26:23.902 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:26:23.902 Initialization complete. Launching workers.
00:26:23.902 ========================================================
00:26:23.902 Latency(us)
00:26:23.902 Device Information : IOPS MiB/s Average min max
00:26:23.902 PCIE (0000:5e:00.0) NSID 1 from core 0: 97629.25 381.36 327.31 34.30 7225.32
00:26:23.902 ========================================================
00:26:23.902 Total : 97629.25 381.36 327.31 34.30 7225.32
00:26:23.902
00:26:23.902 17:43:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:25.279 Initializing NVMe Controllers
00:26:25.279 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:25.279 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:25.279 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:26:25.279 Initialization complete. Launching workers.
00:26:25.279 ========================================================
00:26:25.279 Latency(us)
00:26:25.279 Device Information : IOPS MiB/s Average min max
00:26:25.279 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 94.67 0.37 10903.84 107.78 45691.95
00:26:25.279 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 47.83 0.19 21223.39 6985.38 47886.53
00:26:25.279 ========================================================
00:26:25.279 Total : 142.50 0.56 14367.74 107.78 47886.53
00:26:25.279
00:26:25.279 17:43:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:26.215 Initializing NVMe Controllers
00:26:26.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:26.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:26.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:26:26.215 Initialization complete. Launching workers.
00:26:26.215 ========================================================
00:26:26.215 Latency(us)
00:26:26.215 Device Information : IOPS MiB/s Average min max
00:26:26.215 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11229.46 43.87 2849.20 451.74 7728.86
00:26:26.215 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3874.32 15.13 8296.25 6268.07 22412.17
00:26:26.215 ========================================================
00:26:26.215 Total : 15103.79 59.00 4246.44 451.74 22412.17
00:26:26.216
00:26:26.216 17:43:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:26:26.216 17:43:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:26:26.216 17:43:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:28.750 Initializing NVMe Controllers
00:26:28.750 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:28.750 Controller IO queue size 128, less than required.
00:26:28.750 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:28.750 Controller IO queue size 128, less than required.
00:26:28.750 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:28.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:28.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:26:28.750 Initialization complete. Launching workers.
00:26:28.750 ========================================================
00:26:28.750 Latency(us)
00:26:28.750 Device Information : IOPS MiB/s Average min max
00:26:28.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1813.83 453.46 72177.82 46048.40 117909.03
00:26:28.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 610.94 152.74 211445.20 79834.08 342975.39
00:26:28.750 ========================================================
00:26:28.750 Total : 2424.77 606.19 107267.46 46048.40 342975.39
00:26:28.750
00:26:28.750 17:43:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:26:29.009 No valid NVMe controllers or AIO or URING devices found
00:26:29.009 Initializing NVMe Controllers
00:26:29.009 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:29.009 Controller IO queue size 128, less than required.
00:26:29.009 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:29.009 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:26:29.009 Controller IO queue size 128, less than required.
00:26:29.009 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:29.009 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:26:29.009 WARNING: Some requested NVMe devices were skipped
00:26:29.009 17:43:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:26:31.544 Initializing NVMe Controllers
00:26:31.544 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:31.544 Controller IO queue size 128, less than required.
00:26:31.544 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:31.544 Controller IO queue size 128, less than required.
00:26:31.544 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:31.544 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:31.544 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:26:31.544 Initialization complete. Launching workers.
00:26:31.544
00:26:31.544 ====================
00:26:31.544 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:26:31.544 TCP transport:
00:26:31.544 polls: 13395
00:26:31.544 idle_polls: 9836
00:26:31.544 sock_completions: 3559
00:26:31.544 nvme_completions: 6357
00:26:31.544 submitted_requests: 9482
00:26:31.544 queued_requests: 1
00:26:31.544
00:26:31.544 ====================
00:26:31.544 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:26:31.544 TCP transport:
00:26:31.544 polls: 17475
00:26:31.544 idle_polls: 12385
00:26:31.544 sock_completions: 5090
00:26:31.544 nvme_completions: 6331
00:26:31.544 submitted_requests: 9420
00:26:31.544 queued_requests: 1
00:26:31.544 ========================================================
00:26:31.544 Latency(us)
00:26:31.544 Device Information : IOPS MiB/s Average min max
00:26:31.544 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1586.35 396.59 82700.72 52487.36 133981.15
00:26:31.544 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1579.86 394.97 81247.58 47292.77 125358.85
00:26:31.544 ========================================================
00:26:31.544 Total : 3166.22 791.55 81975.64 47292.77 133981.15
00:26:31.544
00:26:31.544 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:31.804 rmmod nvme_tcp
00:26:31.804 rmmod nvme_fabrics
00:26:31.804 rmmod nvme_keyring
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 1192789 ']'
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 1192789
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1192789 ']'
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1192789
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1192789
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1192789'
00:26:31.804 killing process with pid 1192789
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1192789
00:26:31.804 17:43:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1192789
00:26:33.710 17:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:26:33.710 17:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:26:33.710 17:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:26:33.710 17:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr
00:26:33.710 17:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save
00:26:33.710 17:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:26:33.710 17:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore
00:26:33.710 17:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:33.710 17:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:33.710 17:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:33.710 17:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:33.710 17:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:36.307 17:43:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:36.307
00:26:36.307 real 0m24.613s
00:26:36.307 user 1m4.358s
00:26:36.307 sys 0m8.272s
00:26:36.307 17:43:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:36.307 17:43:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:26:36.307 ************************************
00:26:36.307 END TEST nvmf_perf
00:26:36.307 ************************************
00:26:36.308 17:43:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:26:36.308 17:43:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:26:36.308 17:43:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:36.308 17:43:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:36.308 ************************************
00:26:36.308 START TEST nvmf_fio_host
00:26:36.308 ************************************
00:26:36.308 17:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:26:36.308 * Looking for test storage...
00:26:36.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:36.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.308 --rc genhtml_branch_coverage=1 00:26:36.308 --rc genhtml_function_coverage=1 00:26:36.308 --rc genhtml_legend=1 00:26:36.308 --rc geninfo_all_blocks=1 00:26:36.308 --rc geninfo_unexecuted_blocks=1 00:26:36.308 00:26:36.308 ' 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:36.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.308 --rc genhtml_branch_coverage=1 00:26:36.308 --rc genhtml_function_coverage=1 00:26:36.308 --rc genhtml_legend=1 00:26:36.308 --rc geninfo_all_blocks=1 00:26:36.308 --rc geninfo_unexecuted_blocks=1 00:26:36.308 00:26:36.308 ' 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:36.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.308 --rc genhtml_branch_coverage=1 00:26:36.308 --rc genhtml_function_coverage=1 00:26:36.308 --rc genhtml_legend=1 00:26:36.308 --rc geninfo_all_blocks=1 00:26:36.308 --rc geninfo_unexecuted_blocks=1 00:26:36.308 00:26:36.308 ' 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:36.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.308 --rc genhtml_branch_coverage=1 00:26:36.308 --rc genhtml_function_coverage=1 00:26:36.308 --rc genhtml_legend=1 00:26:36.308 --rc geninfo_all_blocks=1 00:26:36.308 --rc geninfo_unexecuted_blocks=1 00:26:36.308 00:26:36.308 ' 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:36.308 17:43:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.308 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:36.309 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:36.309 
17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:36.309 17:43:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:42.876 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:42.876 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:42.876 Found net devices under 0000:86:00.0: cvl_0_0 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:42.876 Found net devices under 0000:86:00.1: cvl_0_1 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:26:42.876 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:42.877 17:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:42.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:42.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms
00:26:42.877
00:26:42.877 --- 10.0.0.2 ping statistics ---
00:26:42.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:42.877 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:42.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:42.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms
00:26:42.877
00:26:42.877 --- 10.0.0.1 ping statistics ---
00:26:42.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:42.877 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1198900
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1198900
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1198900 ']'
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:42.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:26:42.877 [2024-10-14 17:43:41.133768] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization...
00:26:42.877 [2024-10-14 17:43:41.133812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:42.877 [2024-10-14 17:43:41.205674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:42.877 [2024-10-14 17:43:41.248374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:42.877 [2024-10-14 17:43:41.248413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:42.877 [2024-10-14 17:43:41.248420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:42.877 [2024-10-14 17:43:41.248426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:42.877 [2024-10-14 17:43:41.248430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:42.877 [2024-10-14 17:43:41.250029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:42.877 [2024-10-14 17:43:41.250137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:26:42.877 [2024-10-14 17:43:41.250169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:42.877 [2024-10-14 17:43:41.250170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:26:42.877 [2024-10-14 17:43:41.524006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:26:42.877 Malloc1
00:26:42.877 17:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:42.877 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:26:43.136 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:43.395 [2024-10-14 17:43:42.349753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:43.395 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib=
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:26:43.654 17:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:26:43.913 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:26:43.913 fio-3.35
00:26:43.913 Starting 1 thread
00:26:46.451
00:26:46.451 test: (groupid=0, jobs=1): err= 0: pid=1199491: Mon Oct 14 17:43:45 2024
00:26:46.451 read: IOPS=11.9k, BW=46.5MiB/s (48.7MB/s)(93.1MiB/2005msec)
00:26:46.451 slat (nsec): min=1499, max=239435, avg=1697.10, stdev=2248.98
00:26:46.451 clat (usec): min=3116, max=10032, avg=5948.23, stdev=462.22
00:26:46.451 lat (usec): min=3154, max=10034, avg=5949.92, stdev=462.20
00:26:46.451 clat percentiles (usec):
00:26:46.451 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5604],
00:26:46.451 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063],
00:26:46.451 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652],
00:26:46.451 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 8717], 99.95th=[ 9503],
00:26:46.451 | 99.99th=[ 9896]
00:26:46.451 bw ( KiB/s): min=46736, max=48128, per=99.95%, avg=47548.00, stdev=631.90, samples=4
00:26:46.451 iops : min=11684, max=12032, avg=11887.00, stdev=157.97, samples=4
00:26:46.451 write: IOPS=11.8k, BW=46.2MiB/s (48.5MB/s)(92.7MiB/2005msec); 0 zone resets
00:26:46.451 slat (nsec): min=1559, max=243477, avg=1769.00, stdev=1735.53
00:26:46.451 clat (usec): min=2452, max=9420, avg=4797.42, stdev=381.47
00:26:46.451 lat (usec): min=2467, max=9422, avg=4799.19, stdev=381.54
00:26:46.451 clat percentiles (usec):
00:26:46.451 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490],
00:26:46.451 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883],
00:26:46.451 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342],
00:26:46.451 | 99.00th=[ 5604], 99.50th=[ 5866], 99.90th=[ 7373], 99.95th=[ 8848],
00:26:46.451 | 99.99th=[ 9372]
00:26:46.451 bw ( KiB/s): min=47016, max=47704, per=100.00%, avg=47358.00, stdev=286.32, samples=4
00:26:46.451 iops : min=11754, max=11926, avg=11839.50, stdev=71.58, samples=4
00:26:46.451 lat (msec) : 4=0.66%, 10=99.34%, 20=0.01%
00:26:46.452 cpu : usr=72.41%, sys=26.60%, ctx=80, majf=0, minf=3
00:26:46.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:26:46.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:46.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:26:46.452 issued rwts: total=23845,23735,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:46.452 latency : target=0, window=0, percentile=100.00%, depth=128
00:26:46.452
00:26:46.452 Run status group 0 (all jobs):
00:26:46.452 READ: bw=46.5MiB/s (48.7MB/s), 46.5MiB/s-46.5MiB/s (48.7MB/s-48.7MB/s), io=93.1MiB (97.7MB), run=2005-2005msec
00:26:46.452 WRITE: bw=46.2MiB/s (48.5MB/s), 46.2MiB/s-46.2MiB/s (48.5MB/s-48.5MB/s), io=92.7MiB (97.2MB), run=2005-2005msec
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib=
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:26:46.452 17:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:26:46.452 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:26:46.452 fio-3.35
00:26:46.452 Starting 1 thread
00:26:48.993
00:26:48.993 test: (groupid=0, jobs=1): err= 0: pid=1200024: Mon Oct 14 17:43:47 2024
00:26:48.993 read: IOPS=10.8k, BW=168MiB/s (176MB/s)(337MiB/2004msec)
00:26:48.993 slat (usec): min=2, max=110, avg= 2.87, stdev= 1.63
00:26:48.993 clat (usec): min=2311, max=49721, avg=6953.22, stdev=3389.28
00:26:48.993 lat (usec): min=2314, max=49723, avg=6956.08, stdev=3389.34
00:26:48.993 clat percentiles (usec):
00:26:48.993 | 1.00th=[ 3621], 5.00th=[ 4359], 10.00th=[ 4752], 20.00th=[ 5342],
00:26:48.993 | 30.00th=[ 5800], 40.00th=[ 6259], 50.00th=[ 6718], 60.00th=[ 7177],
00:26:48.993 | 70.00th=[ 7504], 80.00th=[ 8029], 90.00th=[ 8717], 95.00th=[ 9372],
00:26:48.993 | 99.00th=[11338], 99.50th=[43779], 99.90th=[48497], 99.95th=[49546],
00:26:48.993 | 99.99th=[49546]
00:26:48.993 bw ( KiB/s): min=80864, max=95872, per=50.80%, avg=87424.00, stdev=7518.89, samples=4
00:26:48.993 iops : min= 5054, max= 5992, avg=5464.00, stdev=469.93, samples=4
00:26:48.993 write: IOPS=6432, BW=101MiB/s (105MB/s)(179MiB/1784msec); 0 zone resets
00:26:48.993 slat (usec): min=28, max=390, avg=31.99, stdev= 8.47
00:26:48.993 clat (usec): min=3272, max=15618, avg=8626.93, stdev=1481.16
00:26:48.993 lat (usec): min=3302, max=15648, avg=8658.93, stdev=1483.15
00:26:48.993 clat percentiles (usec):
00:26:48.993 | 1.00th=[ 5866], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 7439],
00:26:48.993 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8717],
00:26:48.993 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11469],
00:26:48.993 | 99.00th=[12780], 99.50th=[13698], 99.90th=[14746], 99.95th=[15270],
00:26:48.993 | 99.99th=[15533]
00:26:48.993 bw ( KiB/s): min=83296, max=99712, per=88.65%, avg=91240.00, stdev=7591.69, samples=4
00:26:48.993 iops : min= 5206, max= 6232, avg=5702.50, stdev=474.48, samples=4
00:26:48.993 lat (msec) : 4=1.64%, 10=90.77%, 20=7.21%, 50=0.38%
00:26:48.993 cpu : usr=81.13%, sys=15.68%, ctx=205, majf=0, minf=3
00:26:48.993 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:26:48.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:48.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:26:48.993 issued rwts: total=21556,11476,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:48.993 latency : target=0, window=0, percentile=100.00%, depth=128
00:26:48.993
00:26:48.993 Run status group 0 (all jobs):
00:26:48.993 READ: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=337MiB (353MB), run=2004-2004msec
00:26:48.993 WRITE: bw=101MiB/s (105MB/s), 101MiB/s-101MiB/s (105MB/s-105MB/s), io=179MiB (188MB), run=1784-1784msec
00:26:48.993 17:43:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:49.255 rmmod nvme_tcp
00:26:49.255 rmmod nvme_fabrics
00:26:49.255 rmmod nvme_keyring
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 1198900 ']'
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 1198900
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1198900 ']'
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1198900
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1198900
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1198900'
00:26:49.255 killing process with pid 1198900
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1198900
00:26:49.255 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1198900
00:26:49.515 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:26:49.515 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:26:49.515 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:26:49.515 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr
00:26:49.515 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save
00:26:49.515 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:26:49.515 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore
00:26:49.515 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:49.515 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:49.515 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:49.515 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:49.515 17:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:51.421 17:43:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:51.421
00:26:51.421 real 0m15.611s
00:26:51.421 user 0m45.503s
00:26:51.421 sys 0m6.456s
00:26:51.421 17:43:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:51.421 17:43:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:26:51.421 ************************************
00:26:51.421 END TEST nvmf_fio_host
00:26:51.421 ************************************
00:26:51.680 17:43:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:26:51.680 17:43:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:26:51.680 17:43:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:51.680 17:43:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:51.680 ************************************
00:26:51.680 START TEST nvmf_failover
00:26:51.680 ************************************
00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:51.680 * Looking for test storage... 00:26:51.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:51.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.680 --rc genhtml_branch_coverage=1 00:26:51.680 --rc genhtml_function_coverage=1 00:26:51.680 --rc genhtml_legend=1 00:26:51.680 --rc geninfo_all_blocks=1 00:26:51.680 --rc geninfo_unexecuted_blocks=1 00:26:51.680 00:26:51.680 ' 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:51.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.680 --rc genhtml_branch_coverage=1 00:26:51.680 --rc genhtml_function_coverage=1 00:26:51.680 --rc genhtml_legend=1 00:26:51.680 --rc geninfo_all_blocks=1 00:26:51.680 --rc geninfo_unexecuted_blocks=1 00:26:51.680 00:26:51.680 ' 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:51.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.680 --rc genhtml_branch_coverage=1 00:26:51.680 --rc genhtml_function_coverage=1 00:26:51.680 --rc genhtml_legend=1 00:26:51.680 --rc geninfo_all_blocks=1 00:26:51.680 --rc geninfo_unexecuted_blocks=1 00:26:51.680 00:26:51.680 ' 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:51.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.680 --rc genhtml_branch_coverage=1 00:26:51.680 --rc genhtml_function_coverage=1 00:26:51.680 --rc genhtml_legend=1 00:26:51.680 --rc geninfo_all_blocks=1 00:26:51.680 --rc geninfo_unexecuted_blocks=1 00:26:51.680 00:26:51.680 ' 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.680 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:51.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.681 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:51.940 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:51.940 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:51.940 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.940 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.940 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.940 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:51.940 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:51.940 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:26:51.940 17:43:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:58.510 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:58.510 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:58.510 Found net devices under 0000:86:00.0: cvl_0_0 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:58.510 Found net devices under 0000:86:00.1: cvl_0_1 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:58.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:58.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:26:58.510 00:26:58.510 --- 10.0.0.2 ping statistics --- 00:26:58.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.510 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:26:58.510 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:58.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:26:58.511 00:26:58.511 --- 10.0.0.1 ping statistics --- 00:26:58.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.511 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=1203821 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 1203821 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1203821 ']' 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:58.511 17:43:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:58.511 [2024-10-14 17:43:56.841698] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:26:58.511 [2024-10-14 17:43:56.841741] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.511 [2024-10-14 17:43:56.914817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:58.511 [2024-10-14 17:43:56.956603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
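The namespace plumbing logged just before this gives the target its own network stack on the same machine: one E810 port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, its sibling port (cvl_0_0) moves into a namespace as the target at 10.0.0.2, an iptables rule admits NVMe/TCP on 4420, and both directions are ping-checked before nvmf_tgt is launched inside the namespace. A condensed sketch of that same sequence (interface and namespace names as in the log; run as root; the log also flushes stale addresses first):

# Split two ports of one NIC into initiator (root ns) and target (netns).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP traffic on the initiator side, then verify reachability.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1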
00:26:58.511 [2024-10-14 17:43:56.956638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.511 [2024-10-14 17:43:56.956646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.511 [2024-10-14 17:43:56.956652] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.511 [2024-10-14 17:43:56.956657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:58.511 [2024-10-14 17:43:56.958025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:58.511 [2024-10-14 17:43:56.958054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.511 [2024-10-14 17:43:56.958054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:58.511 17:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:58.511 17:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:26:58.511 17:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:58.511 17:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:58.511 17:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:58.511 17:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.511 17:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:58.511 [2024-10-14 17:43:57.270383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.511 17:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:58.511 Malloc0 00:26:58.511 17:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:58.770 17:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:59.027 17:43:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:59.027 [2024-10-14 17:43:58.073922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.027 17:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:59.285 [2024-10-14 17:43:58.262457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:59.285 17:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:59.544 [2024-10-14 17:43:58.455071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 ***
00:26:59.544 17:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:26:59.544 17:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1204111
00:26:59.544 17:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:59.544 17:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1204111 /var/tmp/bdevperf.sock
00:26:59.544 17:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1204111 ']'
00:26:59.544 17:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:59.544 17:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:59.544 17:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:26:59.544 17:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:59.544 17:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:59.803 17:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:59.803 17:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:26:59.803 17:43:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:27:00.062 NVMe0n1
00:27:00.062 17:43:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:27:00.322
00:27:00.322 17:43:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:27:00.322 17:43:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1204312
00:27:00.322 17:43:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:27:01.259 17:44:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:01.517 [2024-10-14 17:44:00.437606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfa390 is same with the state(6) to be set
[tcp.c:1773 repeats the same recv-state error for tqpair=0x1dfa390 eight more times as the 4420 listener is torn down; duplicates elided]
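bdevperf above is started idle (-z) on its own RPC socket (-r), so the test can build the NVMe bdev over RPC before any I/O starts and then kick the run from bdevperf.py; the later attach calls reuse the same -b NVMe0 and NQN with -x failover, registering ports 4421/4422 as alternate paths rather than new controllers. The moving parts of that pattern, reduced to a sketch (paths and values as in this workspace; the waitforlisten polling loop is elided):

# Start bdevperf idle with a private RPC socket, then drive it over RPC.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bdevperf.sock
"$spdk/build/examples/bdevperf" -z -r "$sock" -q 128 -o 4096 -w verify -t 15 -f &
bdevperf_pid=$!
# ... poll until $sock accepts RPCs, then create the bdev and its paths ...
"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests &
run_test_pid=$!
wait "$run_test_pid"    # returns when the 15 s verify workload reports results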
00:27:01.518 17:44:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:27:04.824 17:44:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:27:04.824
00:27:04.824 17:44:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:27:05.083 [2024-10-14 17:44:03.978082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfb190 is same with the state(6) to be set
[tcp.c:1773 repeats the same recv-state error for tqpair=0x1dfb190 more than sixty times as the 4421 listener is torn down; duplicates elided]
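Those recv-state errors are logged while connections on the removed listener drain. For context, the subsystem the host keeps failing over against was assembled earlier (host/failover.sh@22-28) from a handful of RPCs; condensed here as a sketch using the same values the log shows, with the three listeners folded into a loop:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
"$rpc" nvmf_create_transport -t tcp -o -u 8192        # flags exactly as logged
"$rpc" bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM disk, 512 B blocks
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001   # -a: allow any host
"$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0
for port in 4420 4421 4422; do                        # one listener per test port
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s "$port"
done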
00:27:05.084 17:44:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:27:08.370 17:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:08.370 [2024-10-14 17:44:07.205199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:08.370 17:44:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:27:09.305 17:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:27:09.305 [2024-10-14 17:44:08.414432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfbef0 is same with the state(6) to be set
[tcp.c:1773 repeats the same recv-state error for tqpair=0x1dfbef0 roughly twenty-five times in all as the 4422 listener is torn down; duplicates elided]
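On the target side the test drives failover purely by moving listeners, as the sequence just logged shows: drop 4420 while I/O runs, drop 4421 after the host has attached the third path, re-add 4420, then drop 4422 so the host must hop back. The skeleton of that choreography, as a sketch (the host-side attach between the first two removals is omitted here):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
addr=(-t tcp -a 10.0.0.2)
"$rpc" nvmf_subsystem_remove_listener "$nqn" "${addr[@]}" -s 4420   # fail path 1
sleep 3
"$rpc" nvmf_subsystem_remove_listener "$nqn" "${addr[@]}" -s 4421   # fail path 2
sleep 3
"$rpc" nvmf_subsystem_add_listener "$nqn" "${addr[@]}" -s 4420      # restore path 1
sleep 1
"$rpc" nvmf_subsystem_remove_listener "$nqn" "${addr[@]}" -s 4422   # fail path 3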
00:27:09.305 17:44:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1204312
00:27:15.881 {
00:27:15.881 "results": [
00:27:15.881 {
00:27:15.881 "job": "NVMe0n1",
00:27:15.881 "core_mask": "0x1",
00:27:15.881 "workload": "verify",
00:27:15.881 "status": "finished",
00:27:15.881 "verify_range": {
00:27:15.881 "start": 0,
00:27:15.881 "length": 16384
00:27:15.881 },
00:27:15.881 "queue_depth": 128,
00:27:15.881 "io_size": 4096,
00:27:15.881 "runtime": 15.011029,
00:27:15.881 "iops": 11325.672610451955,
00:27:15.881 "mibps": 44.24090863457795,
00:27:15.881 "io_failed": 5989,
00:27:15.881 "io_timeout": 0,
00:27:15.881 "avg_latency_us": 10895.78699601919,
00:27:15.881 "min_latency_us": 401.79809523809524,
00:27:15.881 "max_latency_us": 12483.047619047618
00:27:15.881 }
00:27:15.881 ],
00:27:15.881 "core_count": 1
00:27:15.881 }
00:27:15.881 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1204111
00:27:15.881 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1204111 ']'
00:27:15.881 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1204111
00:27:15.881 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:27:15.881 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:15.881 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1204111
00:27:15.881 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:27:15.881 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:27:15.881 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1204111'
killing process with pid 1204111
00:27:15.881 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1204111
00:27:15.881 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1204111
00:27:15.881 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-10-14 17:43:58.519206] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization...
[2024-10-14 17:43:58.519260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204111 ]
[2024-10-14 17:43:58.587480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-14 17:43:58.629570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
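The summary just above is internally consistent: 11325.67 IOPS at a 4096-byte io_size works out to 11325.67 × 4096 / 2^20 ≈ 44.24 MiB/s, matching the reported mibps, while io_failed=5989 is presumably the commands caught by the three listener removals. A quick check of the arithmetic:

awk 'BEGIN {
    iops = 11325.672610451955; io_size = 4096
    # bytes per second divided by 2^20 -> MiB/s; prints 44.240909
    printf "MiB/s = %.6f\n", iops * io_size / (1024 * 1024)
}'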
00:27:15.881 11215.00 IOPS, 43.81 MiB/s [2024-10-14T15:44:15.019Z]
[2024-10-14 17:44:00.439665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-14 17:44:00.439700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-14 17:44:00.439894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-10-14 17:44:00.439901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the rest of the in-flight queue is printed the same way: READ and WRITE print_command lines for lba 98928 through at least 99400, each followed by an ABORTED - SQ DELETION (00/08) completion, recorded just after the 4420 listener was removed; dozens of duplicate pairs are elided here and the try.txt dump continues beyond this excerpt]
17:44:00.440596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.440986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.440992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 
17:44:00.441185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:22 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.882 [2024-10-14 17:44:00.441514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:15.882 [2024-10-14 17:44:00.441544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99912 len:8 PRP1 0x0 PRP2 0x0 00:27:15.882 [2024-10-14 17:44:00.441551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:15.882 [2024-10-14 17:44:00.441566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:15.882 [2024-10-14 17:44:00.441572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99920 len:8 PRP1 0x0 PRP2 0x0 00:27:15.882 [2024-10-14 17:44:00.441578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:15.882 [2024-10-14 17:44:00.441590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:15.882 [2024-10-14 17:44:00.441595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99928 len:8 PRP1 0x0 PRP2 0x0 00:27:15.882 [2024-10-14 17:44:00.441607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:15.882 [2024-10-14 17:44:00.441619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:15.882 [2024-10-14 17:44:00.441625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99936 len:8 PRP1 0x0 PRP2 0x0 00:27:15.882 [2024-10-14 17:44:00.441631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.882 [2024-10-14 17:44:00.441671] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21654e0 was disconnected and freed. reset controller. 
00:27:15.883 [2024-10-14 17:44:00.441680] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:27:15.883 [2024-10-14 17:44:00.441702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:15.883 [2024-10-14 17:44:00.441710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:15.883 [2024-10-14 17:44:00.441717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:15.883 [2024-10-14 17:44:00.441723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:15.883 [2024-10-14 17:44:00.441730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:15.883 [2024-10-14 17:44:00.441737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:15.883 [2024-10-14 17:44:00.441744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:15.883 [2024-10-14 17:44:00.441750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:15.883 [2024-10-14 17:44:00.441757] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.883 [2024-10-14 17:44:00.441790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2142400 (9): Bad file descriptor
00:27:15.883 [2024-10-14 17:44:00.444516] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.883 [2024-10-14 17:44:00.514789] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:15.883 10877.50 IOPS, 42.49 MiB/s
[2024-10-14T15:44:15.021Z] 11093.67 IOPS, 43.33 MiB/s
[2024-10-14T15:44:15.021Z] 11190.75 IOPS, 43.71 MiB/s
[2024-10-14T15:44:15.021Z] [2024-10-14 17:44:03.979090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.883 [2024-10-14 17:44:03.979128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ / ABORTED - SQ DELETION (00/08) notice pairs, lba:48712 through lba:49216, then WRITE / ABORTED - SQ DELETION (00/08) pairs, lba:49224 through lba:49416, varying cid ...]
00:27:15.884 [2024-10-14 17:44:03.980416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:15.884 [2024-10-14
17:44:03.980422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.884 [2024-10-14 17:44:03.980939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.980977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:15.884 [2024-10-14 17:44:03.980983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:15.884 [2024-10-14 17:44:03.980989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49720 len:8 PRP1 0x0 PRP2 0x0 00:27:15.884 [2024-10-14 17:44:03.980997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.884 [2024-10-14 17:44:03.981037] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x216e010 was disconnected and freed. reset controller. 
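The "(00/08)" printed with each completion above decodes as status code type 0x0 (SPDK_NVME_SCT_GENERIC) / status code 0x08 (SPDK_NVME_SC_ABORTED_SQ_DELETION): the TCP qpair died, so every request still queued on it is manually completed as aborted before the controller is reset. A minimal sketch of how an I/O completion callback can separate this retryable status from hard errors -- not part of this test; io_complete() and the messages are illustrative, only the spdk/nvme.h types and constants are real SPDK API:

    /* Hypothetical spdk_nvme_cmd_cb; types/constants from spdk/nvme.h. */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            (void)ctx;

            if (!spdk_nvme_cpl_is_error(cpl)) {
                    return;         /* I/O succeeded */
            }

            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* The submission queue was deleted underneath the
                     * request (qpair teardown during reset/failover);
                     * the I/O never executed on the target and is safe
                     * to resubmit once the controller reconnects. */
                    printf("aborted by SQ deletion -- requeue\n");
                    return;
            }

            printf("hard I/O error: sct=0x%x sc=0x%x\n",
                   cpl->status.sct, cpl->status.sc);
    }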
00:27:15.884 [2024-10-14 17:44:03.981046] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:27:15.884 [2024-10-14 17:44:03.981065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:15.884 [2024-10-14 17:44:03.981072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:15.884 [2024-10-14 17:44:03.981079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:15.884 [2024-10-14 17:44:03.981086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:15.884 [2024-10-14 17:44:03.981093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:15.884 [2024-10-14 17:44:03.981100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:15.884 [2024-10-14 17:44:03.981106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:15.884 [2024-10-14 17:44:03.981112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:15.884 [2024-10-14 17:44:03.981119] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.884 [2024-10-14 17:44:03.981139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2142400 (9): Bad file descriptor
00:27:15.884 [2024-10-14 17:44:03.983890] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.884 [2024-10-14 17:44:04.013475] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
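This is the failover itself: bdev_nvme_failover_trid moves the controller from the dead portal 10.0.0.2:4421 to the next registered path 10.0.0.2:4422, the outstanding admin-queue ASYNC EVENT REQUESTs are aborted the same way as the I/O, the controller is marked failed, and the subsequent reset reconnects on the new address ("Resetting controller successful."). At the plain NVMe-driver level the same move can be sketched with spdk_nvme_ctrlr_set_trid() followed by spdk_nvme_ctrlr_reset() -- a hedged sketch, not the bdev_nvme implementation; failover_to_secondary() and the hard-coded trid string are illustrative:

    #include "spdk/nvme.h"

    /* Re-point a failed controller at the secondary portal and reset it.
     * spdk_nvme_ctrlr_set_trid() requires the controller to already be
     * in the failed state, matching the nvme_ctrlr_fail line above. */
    static int
    failover_to_secondary(struct spdk_nvme_ctrlr *ctrlr)
    {
            struct spdk_nvme_transport_id trid = {};
            int rc;

            rc = spdk_nvme_transport_id_parse(&trid,
                    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4422 "
                    "subnqn:nqn.2016-06.io.spdk:cnode1");
            if (rc != 0) {
                    return rc;
            }

            rc = spdk_nvme_ctrlr_set_trid(ctrlr, &trid);
            if (rc != 0) {
                    return rc;
            }

            /* Reconnect on the new path; the I/O aborted above with
             * SQ DELETION must be resubmitted by the upper layer. */
            return spdk_nvme_ctrlr_reset(ctrlr);
    }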
00:27:15.884 11163.60 IOPS, 43.61 MiB/s [2024-10-14T15:44:15.022Z]
11239.67 IOPS, 43.90 MiB/s [2024-10-14T15:44:15.022Z]
11252.00 IOPS, 43.95 MiB/s [2024-10-14T15:44:15.022Z]
11310.12 IOPS, 44.18 MiB/s [2024-10-14T15:44:15.022Z]
11317.89 IOPS, 44.21 MiB/s [2024-10-14T15:44:15.022Z]
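These bdevperf progress samples are consistent with the len:8 (8-sector) commands in the surrounding trace: assuming 512-byte blocks, each I/O carries 8 x 512 B = 4 KiB, so the first sample works out to 11163.60 IOPS x 4096 B = 45,726,105.6 B/s, which is 43.61 MiB/s -- exactly the reported figure. The run holds roughly 11.2-11.3K IOPS (about 44 MiB/s) across the failover events.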
00:27:15.884 [2024-10-14 17:44:08.415510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.884 [2024-10-14 17:44:08.415546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:15.885 [... the pair repeats for the remaining queued requests on qpair 1: READs lba:70440-70848 and WRITEs lba:70864-71376, each completed with ABORTED - SQ DELETION (00/08) ...]
00:27:15.886 [2024-10-14 17:44:08.417310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:15.886 [2024-10-14 17:44:08.417317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71384 len:8 PRP1 0x0 PRP2 0x0
00:27:15.886 [2024-10-14 17:44:08.417323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:15.886 [2024-10-14 17:44:08.417333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:15.886 [2024-10-14 17:44:08.417339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:15.886 [2024-10-14 17:44:08.417344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71392 len:8 PRP1 0x0 PRP2 0x0
00:27:15.886 [2024-10-14 17:44:08.417351] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.886 [2024-10-14 17:44:08.417359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:15.886 [2024-10-14 17:44:08.417366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:15.886 [2024-10-14 17:44:08.417374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71400 len:8 PRP1 0x0 PRP2 0x0 00:27:15.886 [2024-10-14 17:44:08.417381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.886 [2024-10-14 17:44:08.417389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:15.886 [2024-10-14 17:44:08.417394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:15.886 [2024-10-14 17:44:08.417400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71408 len:8 PRP1 0x0 PRP2 0x0 00:27:15.886 [2024-10-14 17:44:08.417406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.886 [2024-10-14 17:44:08.417412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:15.886 [2024-10-14 17:44:08.417418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:15.886 [2024-10-14 17:44:08.417423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71416 len:8 PRP1 0x0 PRP2 0x0 00:27:15.886 [2024-10-14 17:44:08.417429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.886 [2024-10-14 17:44:08.417436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:15.886 [2024-10-14 17:44:08.417441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:15.886 [2024-10-14 17:44:08.417447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71424 len:8 PRP1 0x0 PRP2 0x0 00:27:15.886 [2024-10-14 17:44:08.417453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.886 [2024-10-14 17:44:08.417460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:15.886 [2024-10-14 17:44:08.417465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:15.886 [2024-10-14 17:44:08.417470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71432 len:8 PRP1 0x0 PRP2 0x0 00:27:15.886 [2024-10-14 17:44:08.417477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.886 [2024-10-14 17:44:08.417483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:15.886 [2024-10-14 17:44:08.417489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:15.886 [2024-10-14 17:44:08.417495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71440 len:8 PRP1 0x0 PRP2 0x0 00:27:15.886 [2024-10-14 17:44:08.417501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.886 [2024-10-14 17:44:08.417507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:15.886 [2024-10-14 17:44:08.417512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:15.886 [2024-10-14 17:44:08.417518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71448 len:8 PRP1 0x0 PRP2 0x0 00:27:15.886 [2024-10-14 17:44:08.417524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.886 [2024-10-14 17:44:08.417531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:15.886 [2024-10-14 17:44:08.417536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:15.886 [2024-10-14 17:44:08.417541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70856 len:8 PRP1 0x0 PRP2 0x0 00:27:15.886 [2024-10-14 17:44:08.417547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.886 [2024-10-14 17:44:08.417586] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x216fa60 was disconnected and freed. reset controller. 00:27:15.886 [2024-10-14 17:44:08.417595] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:15.886 [2024-10-14 17:44:08.417619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.886 [2024-10-14 17:44:08.417627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.886 [2024-10-14 17:44:08.417634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.886 [2024-10-14 17:44:08.417641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.886 [2024-10-14 17:44:08.417647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.886 [2024-10-14 17:44:08.417654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.886 [2024-10-14 17:44:08.417661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.886 [2024-10-14 17:44:08.417667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.886 [2024-10-14 17:44:08.417673] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.886 [2024-10-14 17:44:08.417701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2142400 (9): Bad file descriptor 00:27:15.886 [2024-10-14 17:44:08.420439] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.886 [2024-10-14 17:44:08.450060] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:15.886 11285.80 IOPS, 44.09 MiB/s [2024-10-14T15:44:15.024Z] 11295.36 IOPS, 44.12 MiB/s [2024-10-14T15:44:15.024Z] 11302.08 IOPS, 44.15 MiB/s [2024-10-14T15:44:15.024Z] 11304.46 IOPS, 44.16 MiB/s [2024-10-14T15:44:15.024Z] 11320.21 IOPS, 44.22 MiB/s [2024-10-14T15:44:15.024Z] 11325.60 IOPS, 44.24 MiB/s 00:27:15.886 Latency(us) 00:27:15.886 [2024-10-14T15:44:15.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.886 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:15.886 Verification LBA range: start 0x0 length 0x4000 00:27:15.886 NVMe0n1 : 15.01 11325.67 44.24 398.97 0.00 10895.79 401.80 12483.05 00:27:15.886 [2024-10-14T15:44:15.024Z] =================================================================================================================== 00:27:15.886 [2024-10-14T15:44:15.024Z] Total : 11325.67 44.24 398.97 0.00 10895.79 401.80 12483.05 00:27:15.886 Received shutdown signal, test time was about 15.000000 seconds 00:27:15.886 00:27:15.886 Latency(us) 00:27:15.886 [2024-10-14T15:44:15.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.886 [2024-10-14T15:44:15.024Z] =================================================================================================================== 00:27:15.886 [2024-10-14T15:44:15.024Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:15.886 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:27:15.886 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:27:15.886 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:27:15.886 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1206832 00:27:15.886 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1206832 /var/tmp/bdevperf.sock 00:27:15.886 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:15.886 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1206832 ']' 00:27:15.886 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:15.886 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:15.886 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:15.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:15.886 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:15.886 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:15.886 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:15.886 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:27:15.886 17:44:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:16.145 [2024-10-14 17:44:15.039454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:16.145 17:44:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:16.145 [2024-10-14 17:44:15.219980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:16.145 17:44:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:16.405 NVMe0n1 00:27:16.405 17:44:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:16.663 00:27:16.663 17:44:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:17.231 00:27:17.231 17:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:17.231 17:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:27:17.489 17:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:17.489 17:44:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:27:20.777 17:44:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:20.777 17:44:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:27:20.777 17:44:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:20.777 17:44:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1207568 00:27:20.777 17:44:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1207568 00:27:22.154 { 00:27:22.154 "results": [ 00:27:22.154 { 00:27:22.154 "job": "NVMe0n1", 00:27:22.154 "core_mask": "0x1", 
00:27:22.154 "workload": "verify", 00:27:22.154 "status": "finished", 00:27:22.154 "verify_range": { 00:27:22.154 "start": 0, 00:27:22.154 "length": 16384 00:27:22.154 }, 00:27:22.154 "queue_depth": 128, 00:27:22.154 "io_size": 4096, 00:27:22.154 "runtime": 1.04357, 00:27:22.154 "iops": 10858.87865691808, 00:27:22.154 "mibps": 42.41749475358625, 00:27:22.154 "io_failed": 0, 00:27:22.154 "io_timeout": 0, 00:27:22.154 "avg_latency_us": 11302.196423780948, 00:27:22.154 "min_latency_us": 2090.9104761904764, 00:27:22.154 "max_latency_us": 42192.700952380954 00:27:22.154 } 00:27:22.154 ], 00:27:22.154 "core_count": 1 00:27:22.154 } 00:27:22.154 17:44:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:22.154 [2024-10-14 17:44:14.669167] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:27:22.154 [2024-10-14 17:44:14.669229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1206832 ] 00:27:22.154 [2024-10-14 17:44:14.739136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.154 [2024-10-14 17:44:14.776287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.154 [2024-10-14 17:44:16.567901] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:22.154 [2024-10-14 17:44:16.567947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.154 [2024-10-14 17:44:16.567957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.154 [2024-10-14 17:44:16.567966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.154 [2024-10-14 17:44:16.567973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.154 [2024-10-14 17:44:16.567981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.154 [2024-10-14 17:44:16.567988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.154 [2024-10-14 17:44:16.567995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.154 [2024-10-14 17:44:16.568001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.154 [2024-10-14 17:44:16.568008] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.154 [2024-10-14 17:44:16.568032] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.154 [2024-10-14 17:44:16.568045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x781400 (9): Bad file descriptor 00:27:22.154 [2024-10-14 17:44:16.658910] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:22.154 Running I/O for 1 seconds... 00:27:22.154 11204.00 IOPS, 43.77 MiB/s 00:27:22.154 Latency(us) 00:27:22.154 [2024-10-14T15:44:21.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:22.154 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:22.154 Verification LBA range: start 0x0 length 0x4000 00:27:22.154 NVMe0n1 : 1.04 10858.88 42.42 0.00 0.00 11302.20 2090.91 42192.70 00:27:22.154 [2024-10-14T15:44:21.292Z] =================================================================================================================== 00:27:22.154 [2024-10-14T15:44:21.292Z] Total : 10858.88 42.42 0.00 0.00 11302.20 2090.91 42192.70 00:27:22.154 17:44:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:22.154 17:44:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:27:22.154 17:44:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:22.413 17:44:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:22.413 17:44:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:27:22.671 17:44:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:22.671 17:44:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:27:26.081 17:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:26.081 17:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:27:26.081 17:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1206832 00:27:26.081 17:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1206832 ']' 00:27:26.081 17:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1206832 00:27:26.081 17:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:27:26.081 17:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:26.081 17:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1206832 00:27:26.081 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:26.081 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:26.082 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1206832' 00:27:26.082 killing process with pid 1206832 00:27:26.082 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1206832 00:27:26.082 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1206832 00:27:26.082 17:44:25 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:27:26.082 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:26.340 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:26.340 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:26.340 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:26.340 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:26.340 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:27:26.340 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:26.340 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:27:26.340 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:26.340 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:26.340 rmmod nvme_tcp 00:27:26.340 rmmod nvme_fabrics 00:27:26.340 rmmod nvme_keyring 00:27:26.340 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:26.340 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:27:26.340 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:27:26.340 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 1203821 ']' 00:27:26.340 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 1203821 00:27:26.340 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1203821 ']' 00:27:26.340 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1203821 00:27:26.340 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:27:26.340 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:26.340 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1203821 00:27:26.600 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:26.600 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:26.600 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1203821' 00:27:26.600 killing process with pid 1203821 00:27:26.600 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1203821 00:27:26.600 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1203821 00:27:26.600 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:26.600 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:26.600 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:26.600 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:27:26.600 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:27:26.600 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v 
SPDK_NVMF 00:27:26.600 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:27:26.600 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:26.600 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:26.600 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.600 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:26.600 17:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.136 17:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:29.136 00:27:29.136 real 0m37.132s 00:27:29.136 user 1m57.194s 00:27:29.136 sys 0m7.904s 00:27:29.136 17:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:29.136 17:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:29.136 ************************************ 00:27:29.136 END TEST nvmf_failover 00:27:29.136 ************************************ 00:27:29.136 17:44:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:29.136 17:44:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:29.136 17:44:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:29.136 17:44:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.136 ************************************ 00:27:29.136 START TEST nvmf_host_discovery 00:27:29.136 ************************************ 00:27:29.136 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:29.136 * Looking for test storage... 
00:27:29.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:29.136 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:29.136 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:27:29.137 17:44:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:29.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.137 --rc genhtml_branch_coverage=1 00:27:29.137 --rc genhtml_function_coverage=1 00:27:29.137 --rc genhtml_legend=1 00:27:29.137 --rc geninfo_all_blocks=1 00:27:29.137 --rc geninfo_unexecuted_blocks=1 00:27:29.137 00:27:29.137 ' 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:29.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.137 --rc genhtml_branch_coverage=1 00:27:29.137 --rc genhtml_function_coverage=1 00:27:29.137 --rc genhtml_legend=1 00:27:29.137 --rc geninfo_all_blocks=1 00:27:29.137 --rc geninfo_unexecuted_blocks=1 00:27:29.137 00:27:29.137 ' 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:29.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.137 --rc genhtml_branch_coverage=1 00:27:29.137 --rc genhtml_function_coverage=1 00:27:29.137 --rc genhtml_legend=1 00:27:29.137 --rc geninfo_all_blocks=1 00:27:29.137 --rc geninfo_unexecuted_blocks=1 00:27:29.137 00:27:29.137 ' 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:29.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.137 --rc genhtml_branch_coverage=1 00:27:29.137 --rc genhtml_function_coverage=1 00:27:29.137 --rc genhtml_legend=1 00:27:29.137 --rc geninfo_all_blocks=1 00:27:29.137 --rc geninfo_unexecuted_blocks=1 00:27:29.137 00:27:29.137 ' 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:29.137 17:44:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:29.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:29.137 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:29.138 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:29.138 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:29.138 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:27:29.138 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:29.138 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.138 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:29.138 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:29.138 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:29.138 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.138 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:29.138 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.138 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:29.138 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:29.138 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:27:29.138 17:44:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:35.711 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:35.711 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:35.711 17:44:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:35.711 Found net devices under 0000:86:00.0: cvl_0_0 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:35.711 Found net devices under 0000:86:00.1: cvl_0_1 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:35.711 
17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:35.711 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:35.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:35.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:27:35.712 00:27:35.712 --- 10.0.0.2 ping statistics --- 00:27:35.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.712 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:35.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:35.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:27:35.712 00:27:35.712 --- 10.0.0.1 ping statistics --- 00:27:35.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.712 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=1212013 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 1212013 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1212013 ']' 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:35.712 17:44:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.712 [2024-10-14 17:44:34.045173] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
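The nvmf_tcp_init sequence traced above gives the test a real two-port topology: the target interface is moved into its own network namespace while the initiator interface stays in the root namespace, so NVMe/TCP traffic crosses the back-to-back cabled E810 ports instead of being short-circuited through local delivery. Condensed from the commands in the trace (interface names and addresses as logged):

ip netns add cvl_0_0_ns_spdk                                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in
ping -c 1 10.0.0.2                                                  # initiator -> target sanity check

From here on, NVMF_TARGET_NS_CMD (ip netns exec cvl_0_0_ns_spdk) is prepended to the target application's command line so it binds inside the namespace, which is exactly the launch seen above.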
00:27:35.712 [2024-10-14 17:44:34.045218] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.712 [2024-10-14 17:44:34.117629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.712 [2024-10-14 17:44:34.161535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:35.712 [2024-10-14 17:44:34.161563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:35.712 [2024-10-14 17:44:34.161571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:35.712 [2024-10-14 17:44:34.161578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:35.712 [2024-10-14 17:44:34.161583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:35.712 [2024-10-14 17:44:34.162124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.712 [2024-10-14 17:44:34.308916] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.712 [2024-10-14 17:44:34.321110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.712 null0 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.712 null1 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1212149 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1212149 /tmp/host.sock 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1212149 ']' 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:35.712 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.712 [2024-10-14 17:44:34.397572] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
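Two SPDK processes are now running: the target nvmf_tgt (inside the namespace, RPC on the default /var/tmp/spdk.sock) and a second nvmf_tgt acting as the discovery host (root namespace, RPC on /tmp/host.sock). Every rpc_cmd in the rest of the trace is routed to one or the other via -s. The setup so far, sketched with SPDK's scripts/rpc.py (equivalent to the test's rpc_cmd wrapper; all method names and arguments as logged):

# Target side (default RPC socket): transport, discovery listener, two null bdevs
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
scripts/rpc.py bdev_null_create null0 1000 512    # 1000 blocks x 512 B
scripts/rpc.py bdev_null_create null1 1000 512

# Host side (issued just below in the trace): attach to the discovery service;
# nvme* controllers and bdevs appear automatically as the target exposes subsystems
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test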
00:27:35.712 [2024-10-14 17:44:34.397620] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1212149 ] 00:27:35.712 [2024-10-14 17:44:34.462220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.712 [2024-10-14 17:44:34.505581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:35.712 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:35.713 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.972 [2024-10-14 17:44:34.918645] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.972 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.973 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:27:35.973 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:27:35.973 17:44:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:35.973 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:35.973 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.973 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:35.973 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.973 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:35.973 17:44:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.973 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.232 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:27:36.232 17:44:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:27:36.800 [2024-10-14 17:44:35.677761] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:36.800 [2024-10-14 17:44:35.677783] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:36.800 [2024-10-14 17:44:35.677796] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:36.800 [2024-10-14 17:44:35.764049] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:36.800 [2024-10-14 17:44:35.868777] 
bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:36.800 [2024-10-14 17:44:35.868795] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:37.059 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.318 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:37.318 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:37.318 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # 
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:37.318 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:37.318 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:37.318 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:37.318 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:37.318 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:27:37.318 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:37.318 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.319 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:37.577 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.577 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:37.577 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:37.577 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:37.577 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:37.577 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:37.577 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:27:37.577 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:37.577 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:37.577 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:37.577 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:37.577 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:37.577 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:37.577 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.578 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.578 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.578 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:37.578 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:37.578 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:37.578 17:44:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:38.514 [2024-10-14 17:44:37.646165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:38.514 [2024-10-14 17:44:37.647180] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:38.514 [2024-10-14 17:44:37.647206] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:38.514 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:38.773 17:44:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.773 [2024-10-14 17:44:37.774580] 
bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:38.773 17:44:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:27:38.773 [2024-10-14 17:44:37.835345] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:38.773 [2024-10-14 17:44:37.835363] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:38.773 [2024-10-14 17:44:37.835368] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:39.708 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:39.708 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:39.708 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:27:39.708 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:39.708 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:39.708 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.708 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:39.708 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.708 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:39.708 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s 
/tmp/host.sock notify_get_notifications -i 2 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.977 [2024-10-14 17:44:38.910207] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:39.977 [2024-10-14 17:44:38.910230] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:39.977 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:39.977 [2024-10-14 17:44:38.915541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.977 [2024-10-14 17:44:38.915559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.977 [2024-10-14 17:44:38.915568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.977 [2024-10-14 17:44:38.915575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.977 [2024-10-14 17:44:38.915583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.977 [2024-10-14 17:44:38.915589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.978 [2024-10-14 17:44:38.915597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.978 [2024-10-14 17:44:38.915608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.978 
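The @74/@75 steps seen throughout are the test's notification bookkeeping: ask the host app for every notification newer than the last seen ID, count them with jq, and advance the ID. A reconstruction of the helper from the xtrace (the authoritative body lives in host/discovery.sh; this sketch mirrors only what the trace shows):

get_notification_count() {
    # Notifications (e.g. bdev_register events) with ID > $notify_id
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

This is why notify_id in the trace advances 0 -> 1 -> 2 as null0 and null1 are attached as namespaces and their host-side bdevs registered.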
[2024-10-14 17:44:38.915615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7450 is same with the state(6) to be set 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:39.978 [2024-10-14 17:44:38.925556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7450 (9): Bad file descriptor 00:27:39.978 [2024-10-14 17:44:38.935593] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:39.978 [2024-10-14 17:44:38.935874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.978 [2024-10-14 17:44:38.935888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7450 with addr=10.0.0.2, port=4420 00:27:39.978 [2024-10-14 17:44:38.935897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7450 is same with the state(6) to be set 00:27:39.978 [2024-10-14 17:44:38.935908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7450 (9): Bad file descriptor 00:27:39.978 [2024-10-14 17:44:38.935918] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:39.978 [2024-10-14 17:44:38.935924] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:39.978 [2024-10-14 17:44:38.935932] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:39.978 [2024-10-14 17:44:38.935942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
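The recurring autotest_common.sh@914-@920 lines are the waitforcondition polling helper that drives every check in this test: evaluate the condition string up to roughly ten times, one second apart. Reconstructed from the xtrace line numbers (a sketch, not the verbatim source):

waitforcondition() {
    local cond=$1     # @914: condition string, eval'd verbatim
    local max=10      # @915: poll budget
    while (( max-- )); do            # @916
        if eval "$cond"; then        # @917
            return 0                 # @918: condition met
        fi
        sleep 1                      # @920: retry in one second
    done
    return 1          # timed out; the surrounding check fails
}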
00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.978 [2024-10-14 17:44:38.945651] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:39.978 [2024-10-14 17:44:38.945833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.978 [2024-10-14 17:44:38.945844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7450 with addr=10.0.0.2, port=4420 00:27:39.978 [2024-10-14 17:44:38.945851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7450 is same with the state(6) to be set 00:27:39.978 [2024-10-14 17:44:38.945861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7450 (9): Bad file descriptor 00:27:39.978 [2024-10-14 17:44:38.945870] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:39.978 [2024-10-14 17:44:38.945876] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:39.978 [2024-10-14 17:44:38.945882] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:39.978 [2024-10-14 17:44:38.945891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:39.978 [2024-10-14 17:44:38.955702] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:39.978 [2024-10-14 17:44:38.955875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.978 [2024-10-14 17:44:38.955888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7450 with addr=10.0.0.2, port=4420 00:27:39.978 [2024-10-14 17:44:38.955895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7450 is same with the state(6) to be set 00:27:39.978 [2024-10-14 17:44:38.955906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7450 (9): Bad file descriptor 00:27:39.978 [2024-10-14 17:44:38.955915] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:39.978 [2024-10-14 17:44:38.955922] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization 
failed 00:27:39.978 [2024-10-14 17:44:38.955929] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:39.978 [2024-10-14 17:44:38.955939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:39.978 [2024-10-14 17:44:38.965759] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:39.978 [2024-10-14 17:44:38.965972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.978 [2024-10-14 17:44:38.965986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7450 with addr=10.0.0.2, port=4420 00:27:39.978 [2024-10-14 17:44:38.965994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7450 is same with the state(6) to be set 00:27:39.978 [2024-10-14 17:44:38.966005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7450 (9): Bad file descriptor 00:27:39.978 [2024-10-14 17:44:38.966014] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:39.978 [2024-10-14 17:44:38.966020] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:39.978 [2024-10-14 17:44:38.966027] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:39.978 [2024-10-14 17:44:38.966036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.978 [2024-10-14 17:44:38.975813] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:39.978 [2024-10-14 17:44:38.975904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.978 [2024-10-14 17:44:38.975916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7450 with addr=10.0.0.2, port=4420 00:27:39.978 [2024-10-14 17:44:38.975924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7450 is same with the state(6) to be set 00:27:39.978 [2024-10-14 17:44:38.975934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7450 (9): Bad file descriptor 00:27:39.978 [2024-10-14 17:44:38.975944] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:39.978 [2024-10-14 17:44:38.975954] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:39.978 [2024-10-14 17:44:38.975961] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:27:39.978 [2024-10-14 17:44:38.975970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.978 [2024-10-14 17:44:38.985862] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:39.978 [2024-10-14 17:44:38.986030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.978 [2024-10-14 17:44:38.986041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7450 with addr=10.0.0.2, port=4420 00:27:39.978 [2024-10-14 17:44:38.986048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7450 is same with the state(6) to be set 00:27:39.978 [2024-10-14 17:44:38.986057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7450 (9): Bad file descriptor 00:27:39.978 [2024-10-14 17:44:38.986066] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:39.978 [2024-10-14 17:44:38.986072] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:39.978 [2024-10-14 17:44:38.986079] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:39.978 [2024-10-14 17:44:38.986088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.978 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.978 [2024-10-14 17:44:38.995910] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:39.978 [2024-10-14 17:44:38.996013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.978 [2024-10-14 17:44:38.996024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7450 with addr=10.0.0.2, port=4420 00:27:39.979 [2024-10-14 17:44:38.996030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7450 is same with the state(6) to be set 00:27:39.979 [2024-10-14 17:44:38.996039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7450 (9): Bad file descriptor 00:27:39.979 [2024-10-14 17:44:38.996048] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:39.979 [2024-10-14 17:44:38.996054] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:39.979 [2024-10-14 17:44:38.996060] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:39.979 [2024-10-14 17:44:38.996069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
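The records above repeat the same reconnect failure roughly every 10 ms: connect() to 10.0.0.2:4420 is refused (errno 111) because the subsystem has moved from port 4420 to 4421 (see the discovery_remove_controllers records below), so each spdk_nvme_ctrlr_reconnect_poll_async attempt fails while the test harness keeps polling the bdev list. A minimal sketch of that polling helper, reconstructed from the autotest_common.sh lines visible in the trace (914-918); the sleep between retries is an assumption, since the trace only shows the local/eval/return logic:

function waitforcondition() {
	local cond=$1 # a shell condition string, e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
	local max=10
	while ((max--)); do
		if eval "$cond"; then
			return 0
		fi
		sleep 1 # assumed retry interval; not shown in the trace
	done
	return 1
}

As used in the trace above: waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]', where get_bdev_list wraps "rpc_cmd -s /tmp/host.sock bdev_get_bdevs" piped through jq, sort, and xargs.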
00:27:39.979 [2024-10-14 17:44:38.996472] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:39.979 [2024-10-14 17:44:38.996486] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:39.979 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:39.979 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:39.979 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:39.979 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:39.979 17:44:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@917 -- # get_notification_count 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:39.979 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:40.238 17:44:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.238 17:44:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.617 [2024-10-14 17:44:40.318721] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:41.617 [2024-10-14 17:44:40.318740] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:41.617 [2024-10-14 17:44:40.318752] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:41.617 [2024-10-14 17:44:40.405013] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:41.617 [2024-10-14 17:44:40.587160] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:41.617 [2024-10-14 17:44:40.587188] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:27:41.617 request: 00:27:41.617 { 00:27:41.617 "name": "nvme", 00:27:41.617 "trtype": "tcp", 00:27:41.617 "traddr": "10.0.0.2", 00:27:41.617 "adrfam": "ipv4", 00:27:41.617 "trsvcid": "8009", 00:27:41.617 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:41.617 "wait_for_attach": true, 00:27:41.617 "method": "bdev_nvme_start_discovery", 00:27:41.617 "req_id": 1 00:27:41.617 } 00:27:41.617 Got JSON-RPC error response 00:27:41.617 response: 00:27:41.617 { 00:27:41.617 "code": -17, 00:27:41.617 "message": "File exists" 00:27:41.617 } 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.617 request: 00:27:41.617 { 00:27:41.617 "name": "nvme_second", 00:27:41.617 "trtype": "tcp", 00:27:41.617 "traddr": "10.0.0.2", 00:27:41.617 "adrfam": "ipv4", 00:27:41.617 "trsvcid": "8009", 00:27:41.617 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:41.617 "wait_for_attach": true, 00:27:41.617 "method": "bdev_nvme_start_discovery", 00:27:41.617 "req_id": 1 00:27:41.617 } 00:27:41.617 Got JSON-RPC error response 00:27:41.617 response: 00:27:41.617 { 00:27:41.617 "code": -17, 00:27:41.617 "message": "File exists" 00:27:41.617 } 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:41.617 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.876 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:41.876 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:27:41.876 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:41.876 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:41.876 17:44:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.876 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:41.876 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:41.876 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:41.876 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.876 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:41.876 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:41.876 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:27:41.876 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:41.876 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:41.876 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.876 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:41.876 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.876 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:41.877 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.877 17:44:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.813 [2024-10-14 17:44:41.818499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.813 [2024-10-14 17:44:41.818527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d4890 with addr=10.0.0.2, port=8010 00:27:42.813 [2024-10-14 17:44:41.818541] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:42.813 [2024-10-14 17:44:41.818547] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:42.813 [2024-10-14 17:44:41.818553] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:43.748 [2024-10-14 17:44:42.820999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:43.748 [2024-10-14 17:44:42.821023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d4890 with addr=10.0.0.2, port=8010 00:27:43.748 [2024-10-14 17:44:42.821034] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:43.748 [2024-10-14 17:44:42.821040] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:43.748 [2024-10-14 17:44:42.821050] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:44.683 [2024-10-14 17:44:43.823203] 
bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:44.683 request: 00:27:44.683 { 00:27:44.942 "name": "nvme_second", 00:27:44.942 "trtype": "tcp", 00:27:44.942 "traddr": "10.0.0.2", 00:27:44.942 "adrfam": "ipv4", 00:27:44.942 "trsvcid": "8010", 00:27:44.942 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:44.942 "wait_for_attach": false, 00:27:44.942 "attach_timeout_ms": 3000, 00:27:44.942 "method": "bdev_nvme_start_discovery", 00:27:44.942 "req_id": 1 00:27:44.942 } 00:27:44.942 Got JSON-RPC error response 00:27:44.942 response: 00:27:44.942 { 00:27:44.942 "code": -110, 00:27:44.942 "message": "Connection timed out" 00:27:44.942 } 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1212149 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:44.942 rmmod nvme_tcp 00:27:44.942 rmmod nvme_fabrics 00:27:44.942 rmmod nvme_keyring 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:27:44.942 17:44:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 1212013 ']' 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 1212013 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1212013 ']' 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1212013 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1212013 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:44.942 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1212013' 00:27:44.943 killing process with pid 1212013 00:27:44.943 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1212013 00:27:44.943 17:44:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1212013 00:27:45.202 17:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:45.202 17:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:45.202 17:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:45.202 17:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:27:45.202 17:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:27:45.202 17:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:27:45.202 17:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:45.202 17:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:45.202 17:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:45.202 17:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.202 17:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:45.202 17:44:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.107 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:47.107 00:27:47.107 real 0m18.392s 00:27:47.107 user 0m22.706s 00:27:47.107 sys 0m5.962s 00:27:47.107 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:47.107 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:47.107 ************************************ 00:27:47.107 END TEST nvmf_host_discovery 00:27:47.107 ************************************ 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.368 ************************************ 00:27:47.368 START TEST nvmf_host_multipath_status 00:27:47.368 ************************************ 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:47.368 * Looking for test storage... 00:27:47.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:47.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.368 --rc genhtml_branch_coverage=1 00:27:47.368 --rc genhtml_function_coverage=1 00:27:47.368 --rc genhtml_legend=1 00:27:47.368 --rc geninfo_all_blocks=1 00:27:47.368 --rc geninfo_unexecuted_blocks=1 00:27:47.368 00:27:47.368 ' 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:47.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.368 --rc genhtml_branch_coverage=1 00:27:47.368 --rc genhtml_function_coverage=1 00:27:47.368 --rc genhtml_legend=1 00:27:47.368 --rc geninfo_all_blocks=1 00:27:47.368 --rc geninfo_unexecuted_blocks=1 00:27:47.368 00:27:47.368 ' 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:47.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.368 --rc genhtml_branch_coverage=1 00:27:47.368 --rc genhtml_function_coverage=1 00:27:47.368 --rc genhtml_legend=1 00:27:47.368 --rc geninfo_all_blocks=1 00:27:47.368 --rc geninfo_unexecuted_blocks=1 00:27:47.368 00:27:47.368 ' 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:47.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.368 --rc genhtml_branch_coverage=1 00:27:47.368 --rc genhtml_function_coverage=1 00:27:47.368 --rc genhtml_legend=1 00:27:47.368 --rc geninfo_all_blocks=1 00:27:47.368 --rc geninfo_unexecuted_blocks=1 00:27:47.368 00:27:47.368 ' 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
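The xtrace above shows scripts/common.sh resolving "lt 1.15 2" for the lcov version check: each version string is split on IFS=.-: into components (ver1_l=2, ver2_l=1), and the components are compared pairwise until one side wins. A condensed sketch of that comparison under the same conventions; only the '<' branch is exercised here, so the handling of the other operators, implied by the "case $op" in the trace, should be treated as an assumption:

function cmp_versions() {
	local ver1 ver2 ver1_l ver2_l v
	local IFS=.-: op=$2
	read -ra ver1 <<< "$1"
	read -ra ver2 <<< "$3"
	ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
	# compare component by component; unset components evaluate as 0
	for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
		((ver1[v] > ver2[v])) && { [[ $op == ">" || $op == ">=" ]]; return; }
		((ver1[v] < ver2[v])) && { [[ $op == "<" || $op == "<=" ]]; return; }
	done
	[[ $op == *"="* ]] # all components equal
}
function lt() { cmp_versions "$1" "<" "$2"; } # "lt 1.15 2" returns 0, as traced above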
00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:47.368 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:47.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:27:47.369 17:44:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:53.941 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:53.941 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:27:53.941 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:53.941 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:53.941 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:53.941 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:53.941 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:53.941 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:27:53.942 17:44:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:53.942 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
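The prologue of the multipath_status run re-scans the host for supported NICs: nvmf/common.sh builds ID lists for Intel E810 (0x1592, 0x159b), x722, and Mellanox parts, then walks the detected PCI functions and maps each one to its kernel net device via /sys/bus/pci/devices/$pci/net/, producing the "Found 0000:86:00.0 (0x8086 - 0x159b)" and "Found net devices under ...: cvl_0_0" records seen here. A hypothetical standalone equivalent of that scan, reduced to the one E810 ID actually matched in this run; the real gather_supported_nvmf_pci_devs also checks the bound driver and the other ID lists:

# walk sysfs and report E810 ports plus their net devices
for pci in /sys/bus/pci/devices/*; do
	# assumption: match by vendor/device ID alone; the traced helper also
	# inspects the driver ("ice") and handles rdma-specific cases
	[[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
	echo "Found ${pci##*/} (0x8086 - 0x159b)"
	for net_dev in "$pci"/net/*; do
		[[ -e $net_dev ]] || continue # skip ports with no bound net device
		echo "  net device under ${pci##*/}: ${net_dev##*/}"
	done
done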
00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:53.942 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:53.942 Found net devices under 0000:86:00.0: cvl_0_0 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:27:53.942 Found net devices under 0000:86:00.1: cvl_0_1 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:53.942 17:44:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:53.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:53.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:27:53.942 00:27:53.942 --- 10.0.0.2 ping statistics --- 00:27:53.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.942 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:53.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:53.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:27:53.942 00:27:53.942 --- 10.0.0.1 ping statistics --- 00:27:53.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.942 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:53.942 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=1217335 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 1217335 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1217335 ']' 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:53.943 17:44:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:53.943 [2024-10-14 17:44:52.458210] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:27:53.943 [2024-10-14 17:44:52.458253] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:53.943 [2024-10-14 17:44:52.531755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:53.943 [2024-10-14 17:44:52.572780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.943 [2024-10-14 17:44:52.572819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:53.943 [2024-10-14 17:44:52.572826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.943 [2024-10-14 17:44:52.572832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:53.943 [2024-10-14 17:44:52.572837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:53.943 [2024-10-14 17:44:52.574011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.943 [2024-10-14 17:44:52.574014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1217335 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:53.943 [2024-10-14 17:44:52.877026] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:53.943 17:44:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:54.202 Malloc0 00:27:54.202 17:44:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:27:54.202 17:44:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:54.461 17:44:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:54.720 [2024-10-14 17:44:53.684447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.720 17:44:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:54.979 [2024-10-14 17:44:53.864900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:54.979 17:44:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:54.979 17:44:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1217582 00:27:54.979 17:44:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:54.979 17:44:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1217582 /var/tmp/bdevperf.sock 00:27:54.979 17:44:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1217582 ']' 00:27:54.979 17:44:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:54.979 17:44:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:54.979 17:44:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:54.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
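(The target-side provisioning in the block above condenses to five rpc.py calls; every flag below is taken verbatim from the xtrace, and only the long workspace path is bound to a variable for readability.)

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8192-byte in-capsule data
$rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r: ANA reporting on
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Two listeners on one ANA-reporting subsystem give the bdevperf initiator two distinct I/O paths to the same namespace, which is the fixture every check below exercises.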
00:27:54.979 17:44:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:54.979 17:44:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:55.237 17:44:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:55.237 17:44:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:27:55.237 17:44:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:55.237 17:44:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:55.805 Nvme0n1 00:27:55.805 17:44:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:56.372 Nvme0n1 00:27:56.372 17:44:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:56.372 17:44:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:58.276 17:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:58.276 17:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:58.534 17:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:58.793 17:44:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:59.729 17:44:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:59.729 17:44:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:59.729 17:44:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.729 17:44:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:59.988 17:44:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.988 17:44:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:59.988 17:44:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:59.988 17:44:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.988 17:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:59.988 17:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:59.988 17:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.988 17:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:00.247 17:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.247 17:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:00.247 17:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.247 17:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:00.506 17:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.506 17:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:00.506 17:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.506 17:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:00.765 17:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.765 17:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:00.765 17:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.765 17:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:01.024 17:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.024 17:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:28:01.024 17:44:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
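(Reconstructed from the repeated @64 xtrace lines: each port_status probe asks the bdevperf RPC socket for its I/O-path table and compares one field of the path whose trsvcid matches the given port. The function body is a sketch; the jq filter is verbatim from the log, and rpc is the same shorthand as in the sketch above.)

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
port_status() {    # usage: port_status <port> <field> <expected>, e.g. port_status 4420 current true
    local port=$1 field=$2 expected=$3 actual
    actual=$($rpc -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ $actual == "$expected" ]]
}

Under the default active_passive policy only one path reports current==true at a time, so "check_status true false true true true true" above reads: 4420 current, 4421 not, both connected, both accessible.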
00:28:01.024 17:45:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:01.283 17:45:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:28:02.219 17:45:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:28:02.219 17:45:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:02.219 17:45:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.219 17:45:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:02.478 17:45:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:02.478 17:45:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:02.478 17:45:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:02.478 17:45:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.736 17:45:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.736 17:45:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:02.736 17:45:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.736 17:45:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:02.994 17:45:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.994 17:45:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:02.995 17:45:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.995 17:45:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:03.253 17:45:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.253 17:45:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:03.253 17:45:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
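(The state flips that open each of these cycles come from one small helper, reconstructed from the paired @59/@60 rpc.py lines: set the ANA state of each listener, then the surrounding script sleeps one second so the initiator can re-read the ANA log page before the checks run.)

set_ANA_state() {    # usage: set_ANA_state <state-for-4420> <state-for-4421>
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}
set_ANA_state non_optimized optimized    # the cycle just checked: 4421 becomes the current path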
00:28:03.253 17:45:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:03.253 17:45:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.253 17:45:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:03.253 17:45:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.253 17:45:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:03.511 17:45:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.511 17:45:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:28:03.511 17:45:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:03.770 17:45:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:04.041 17:45:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:28:04.975 17:45:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:28:04.975 17:45:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:04.975 17:45:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.975 17:45:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:05.234 17:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:05.234 17:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:05.234 17:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:05.234 17:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:05.493 17:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:05.493 17:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:05.493 17:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:05.493 17:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:05.493 17:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:05.493 17:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:05.493 17:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:05.493 17:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:05.752 17:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:05.752 17:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:05.752 17:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:05.752 17:45:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:06.011 17:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:06.011 17:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:06.011 17:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:06.011 17:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:06.270 17:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:06.270 17:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:28:06.270 17:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:06.529 17:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:06.529 17:45:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:28:07.907 17:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:28:07.907 17:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:07.907 17:45:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.907 17:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:07.907 17:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.907 17:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:07.907 17:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.907 17:45:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:08.166 17:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:08.166 17:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:08.166 17:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:08.166 17:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:08.166 17:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:08.166 17:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:08.166 17:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:08.166 17:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:08.424 17:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:08.424 17:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:08.424 17:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:08.424 17:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:08.683 17:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:08.683 17:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:08.683 17:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:08.683 17:45:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:08.942 17:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:08.942 17:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:08.942 17:45:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:09.200 17:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:09.200 17:45:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:10.578 17:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:28:10.578 17:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:10.578 17:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.578 17:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:10.578 17:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:10.578 17:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:10.578 17:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.578 17:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:10.578 17:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:10.578 17:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:10.838 17:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.838 17:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:10.838 17:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:10.838 17:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:10.838 17:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.838 17:45:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:11.096 17:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:11.096 17:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:11.096 17:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:11.096 17:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:11.355 17:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:11.355 17:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:11.355 17:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:11.355 17:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:11.614 17:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:11.614 17:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:28:11.614 17:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:11.614 17:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:11.873 17:45:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:28:12.810 17:45:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:28:12.810 17:45:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:12.810 17:45:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:12.810 17:45:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:13.069 17:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:13.069 17:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:13.069 17:45:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:13.069 17:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:13.328 17:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:13.328 17:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:13.328 17:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:13.328 17:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:13.587 17:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:13.587 17:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:13.587 17:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:13.587 17:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:13.846 17:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:13.846 17:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:13.846 17:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:13.846 17:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:13.846 17:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:13.846 17:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:13.846 17:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:13.846 17:45:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:14.105 17:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.105 17:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:28:14.363 17:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:28:14.363 17:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:14.622 17:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:14.623 17:45:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:16.001 17:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:16.001 17:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:16.001 17:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.001 17:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:16.001 17:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:16.001 17:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:16.001 17:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:16.001 17:45:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.260 17:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:16.260 17:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:16.260 17:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.260 17:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:16.260 17:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:16.260 17:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:16.260 17:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.260 17:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:16.519 17:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:16.519 17:45:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:16.519 17:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.519 17:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:16.779 17:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:16.779 17:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:16.779 17:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:16.779 17:45:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:17.038 17:45:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:17.038 17:45:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:17.038 17:45:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:17.297 17:45:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:17.556 17:45:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:28:18.494 17:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:28:18.494 17:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:18.494 17:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.494 17:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:18.754 17:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:18.754 17:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:18.754 17:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.754 17:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:18.754 17:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:18.754 17:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:18.754 17:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:18.754 17:45:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:19.012 17:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:19.012 17:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:19.012 17:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:19.012 17:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:19.272 17:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:19.272 17:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:19.272 17:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:19.272 17:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:19.532 17:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:19.532 17:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:19.532 17:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:19.532 17:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:19.792 17:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:19.792 17:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:28:19.792 17:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:19.792 17:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:20.052 17:45:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
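(Why the expectations flip in the cycles around here: the bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active call issued above moves Nvme0n1 off the default active_passive policy, where exactly one path is current, onto active_active, where every path in the best available ANA group is current at once. A short sketch of the sequence, reusing the helpers sketched earlier:)

$rpc -s "$bdevperf_rpc_sock" bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
set_ANA_state non_optimized non_optimized    # both paths equally good again
sleep 1
port_status 4420 current true    # active_active: with equal ANA states, both paths
port_status 4421 current true    # report current==true, matching the checks below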
00:28:20.990 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:20.990 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:20.990 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:20.990 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:21.250 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.250 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:21.250 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.250 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:21.510 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.510 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:21.510 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.510 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:21.771 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.771 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:21.771 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.771 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:21.771 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:21.771 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:21.771 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:21.771 17:45:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:22.031 17:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.031 17:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:22.031 17:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:22.031 17:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:22.291 17:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:22.291 17:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:22.291 17:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:22.551 17:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:22.811 17:45:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:23.750 17:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:23.750 17:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:23.750 17:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:23.750 17:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:24.011 17:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:24.011 17:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:24.011 17:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.011 17:45:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:24.011 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:24.011 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:24.011 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.011 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:24.270 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:28:24.270 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:24.270 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.270 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:24.530 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:24.530 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:24.530 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:24.530 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:24.790 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:24.790 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:24.790 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:24.790 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:25.060 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:25.060 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1217582 00:28:25.060 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1217582 ']' 00:28:25.060 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1217582 00:28:25.060 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:28:25.060 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:25.060 17:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1217582 00:28:25.060 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:28:25.060 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:28:25.060 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1217582' 00:28:25.060 killing process with pid 1217582 00:28:25.060 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1217582 00:28:25.060 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1217582 00:28:25.060 { 00:28:25.060 "results": [ 00:28:25.060 { 00:28:25.060 "job": "Nvme0n1", 
00:28:25.060 "core_mask": "0x4", 00:28:25.060 "workload": "verify", 00:28:25.060 "status": "terminated", 00:28:25.060 "verify_range": { 00:28:25.060 "start": 0, 00:28:25.060 "length": 16384 00:28:25.060 }, 00:28:25.060 "queue_depth": 128, 00:28:25.060 "io_size": 4096, 00:28:25.060 "runtime": 28.620694, 00:28:25.060 "iops": 10690.02729283923, 00:28:25.060 "mibps": 41.75791911265324, 00:28:25.060 "io_failed": 0, 00:28:25.060 "io_timeout": 0, 00:28:25.060 "avg_latency_us": 11954.654492566313, 00:28:25.060 "min_latency_us": 419.35238095238094, 00:28:25.060 "max_latency_us": 3019898.88 00:28:25.060 } 00:28:25.060 ], 00:28:25.060 "core_count": 1 00:28:25.060 } 00:28:25.060 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1217582 00:28:25.060 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:25.060 [2024-10-14 17:44:53.939722] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:28:25.060 [2024-10-14 17:44:53.939775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1217582 ] 00:28:25.060 [2024-10-14 17:44:54.007827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.060 [2024-10-14 17:44:54.048887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:25.060 Running I/O for 90 seconds... 00:28:25.060 11412.00 IOPS, 44.58 MiB/s [2024-10-14T15:45:24.198Z] 11522.00 IOPS, 45.01 MiB/s [2024-10-14T15:45:24.198Z] 11578.33 IOPS, 45.23 MiB/s [2024-10-14T15:45:24.198Z] 11607.25 IOPS, 45.34 MiB/s [2024-10-14T15:45:24.198Z] 11586.40 IOPS, 45.26 MiB/s [2024-10-14T15:45:24.198Z] 11568.67 IOPS, 45.19 MiB/s [2024-10-14T15:45:24.198Z] 11564.29 IOPS, 45.17 MiB/s [2024-10-14T15:45:24.198Z] 11552.00 IOPS, 45.12 MiB/s [2024-10-14T15:45:24.198Z] 11532.78 IOPS, 45.05 MiB/s [2024-10-14T15:45:24.198Z] 11531.40 IOPS, 45.04 MiB/s [2024-10-14T15:45:24.198Z] 11514.45 IOPS, 44.98 MiB/s [2024-10-14T15:45:24.198Z] 11513.08 IOPS, 44.97 MiB/s [2024-10-14T15:45:24.198Z] [2024-10-14 17:45:08.070122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:123736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.060 [2024-10-14 17:45:08.070161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:25.060 [2024-10-14 17:45:08.070197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.060 [2024-10-14 17:45:08.070205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:25.060 [2024-10-14 17:45:08.070218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:124200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.060 [2024-10-14 17:45:08.070226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.060 [2024-10-14 17:45:08.070238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:124208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:25.060 [2024-10-14 17:45:08.070245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.060 [2024-10-14 17:45:08.070258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:124216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.060 [2024-10-14 17:45:08.070265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:25.060 [2024-10-14 17:45:08.070277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:124224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.060 [2024-10-14 17:45:08.070284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:25.060 [2024-10-14 17:45:08.070296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:124232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:124240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:124248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:124256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:124264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:124272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:124280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 
lba:124288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:124296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:124304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:124312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:124320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:124328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:124336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:124344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:124352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:124368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:124376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:124384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:124392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:124400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:124408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:124416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:124424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 
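[Editor's note] Context for the wall of NOTICE pairs above and below: each pair is bdevperf printing a queued I/O command and then its completion status. "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" is the NVMe path-related status (SCT 0x3, SC 0x2) the target returns for commands that landed on a listener whose ANA state had just been flipped to inaccessible earlier in the run; the host-side multipath code then retries them on the path that is still accessible, which is consistent with the run ending with "io_failed": 0 in the summary above. A rough spot-check over the same file dumped above (a hypothetical one-liner, not part of the test):

    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt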
00:28:25.061 [2024-10-14 17:45:08.070896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:124448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:124456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:124464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:25.061 [2024-10-14 17:45:08.070976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.061 [2024-10-14 17:45:08.070983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.070996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:124488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:124496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:124504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:124512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:124520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:124528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:124536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:124552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:124560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:124576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:124584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:124600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:124616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:124624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:124640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:124648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:124656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:124672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:124680 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.071594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:124688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.071607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.072713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.072722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.072738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:124704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.072745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.072760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:124712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.072767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.072782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.062 [2024-10-14 17:45:08.072789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.062 [2024-10-14 17:45:08.072805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.062 [2024-10-14 17:45:08.072811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.072827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.072834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.072849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:123760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.072857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.072873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.072879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.072895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:123776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.072902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.072917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.072923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.072938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.072953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.072968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.072975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.072990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:123808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.072997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.073019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:123824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.073041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:123832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.073063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:123840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.073085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.073148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 
17:45:08.073165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:123856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.073174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:123864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.073197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.073220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:123880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.073243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:123888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.073266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:123896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.073289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.073311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.073334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.073358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:124728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.063 [2024-10-14 17:45:08.073381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:124736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.063 [2024-10-14 17:45:08.073404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:124744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.063 [2024-10-14 17:45:08.073426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.063 [2024-10-14 17:45:08.073448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:123928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.073472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:123936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.073494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:123944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.073519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.073541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:25.063 [2024-10-14 17:45:08.073557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.063 [2024-10-14 17:45:08.073564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.073580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:123968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.073587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.073607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:123976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.073614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.073630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:123984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.073637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.073653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.073660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.073676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.073682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.073699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.073705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.073722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.073729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.073747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.073753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.073769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.073776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.073792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:124040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.073799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.073815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:124048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.073822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.073838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:124056 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.073845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.073861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:124064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.073867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.073885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:124072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.073892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.073908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.073914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.073930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:124088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.073937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.073953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:124096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.073960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.073976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:124104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.073983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.073999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:124112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.074006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.074027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.074033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.074050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.074056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.074073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:85 nsid:1 lba:124136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.074079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.074097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:124144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.074103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.074120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:124152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.074127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.074143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.074149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.074165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.074172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.074188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.074195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:25.064 [2024-10-14 17:45:08.074211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.064 [2024-10-14 17:45:08.074218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:25.064 11272.23 IOPS, 44.03 MiB/s [2024-10-14T15:45:24.202Z] 10467.07 IOPS, 40.89 MiB/s [2024-10-14T15:45:24.202Z] 9769.27 IOPS, 38.16 MiB/s [2024-10-14T15:45:24.202Z] 9366.81 IOPS, 36.59 MiB/s [2024-10-14T15:45:24.202Z] 9498.06 IOPS, 37.10 MiB/s [2024-10-14T15:45:24.202Z] 9609.44 IOPS, 37.54 MiB/s [2024-10-14T15:45:24.202Z] 9798.32 IOPS, 38.27 MiB/s [2024-10-14T15:45:24.202Z] 9989.15 IOPS, 39.02 MiB/s [2024-10-14T15:45:24.202Z] 10153.48 IOPS, 39.66 MiB/s [2024-10-14T15:45:24.202Z] 10215.68 IOPS, 39.91 MiB/s [2024-10-14T15:45:24.202Z] 10274.48 IOPS, 40.13 MiB/s [2024-10-14T15:45:24.202Z] 10345.38 IOPS, 40.41 MiB/s [2024-10-14T15:45:24.202Z] 10461.52 IOPS, 40.87 MiB/s [2024-10-14T15:45:24.202Z] 10582.00 IOPS, 41.34 MiB/s [2024-10-14T15:45:24.202Z] [2024-10-14 17:45:21.690824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.065 [2024-10-14 17:45:21.690862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:25.065 
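[Editor's note] Two sanity checks on the terminated-job JSON printed at teardown. First, the throughput figure is internally consistent: "mibps" is just iops x io_size / 2^20. Second, the per-second samples interleaved above tell the failover story: roughly 11.5k IOPS while both paths were usable, a dip toward ~9.4k in the samples after a path went inaccessible, then a climb back to ~10.6k once I/O settled on the surviving path. A quick check of the arithmetic (standalone awk, not from the test):

    awk 'BEGIN { iops = 10690.02729283923; io_size = 4096
                 printf "%.8f MiB/s\n", iops * io_size / (1024 * 1024) }'
    # prints 41.75791911 MiB/s, matching the reported "mibps"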
[2024-10-14 17:45:21.690896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.065 [2024-10-14 17:45:21.690904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.690923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.065 [2024-10-14 17:45:21.690930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.690942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.065 [2024-10-14 17:45:21.690949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.690961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.065 [2024-10-14 17:45:21.690967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.690980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.065 [2024-10-14 17:45:21.690986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.690999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.065 [2024-10-14 17:45:21.691005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.691017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.065 [2024-10-14 17:45:21.691024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.691036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.065 [2024-10-14 17:45:21.691043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.691056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.065 [2024-10-14 17:45:21.691063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.691075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.065 [2024-10-14 17:45:21.691082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 
cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.691094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.065 [2024-10-14 17:45:21.691101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.691113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.065 [2024-10-14 17:45:21.691120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.691132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.065 [2024-10-14 17:45:21.691138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.691151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.065 [2024-10-14 17:45:21.691159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.691172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.065 [2024-10-14 17:45:21.691179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.691322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.065 [2024-10-14 17:45:21.691331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.691346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.065 [2024-10-14 17:45:21.691353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.691365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.065 [2024-10-14 17:45:21.691372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.691384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.065 [2024-10-14 17:45:21.691391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.691403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.065 [2024-10-14 17:45:21.691410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.691422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.065 [2024-10-14 17:45:21.691428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.691440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.065 [2024-10-14 17:45:21.691447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.691458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.065 [2024-10-14 17:45:21.691466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.691478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.065 [2024-10-14 17:45:21.691485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:25.065 [2024-10-14 17:45:21.691497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.065 [2024-10-14 17:45:21.691504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.691516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.691525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.691537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.691544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.691556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.691563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.691983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.066 [2024-10-14 17:45:21.691995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.066 [2024-10-14 17:45:21.692018] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.066 [2024-10-14 17:45:21.692038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5296 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:25.066 [2024-10-14 17:45:21.692311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.066 [2024-10-14 17:45:21.692574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:25.066 [2024-10-14 17:45:21.692587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.067 [2024-10-14 17:45:21.692593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:25.067 10638.85 IOPS, 41.56 MiB/s [2024-10-14T15:45:24.205Z] 10674.07 IOPS, 41.70 MiB/s [2024-10-14T15:45:24.205Z] Received shutdown signal, test time was about 28.621361 seconds 00:28:25.067 00:28:25.067 Latency(us) 00:28:25.067 [2024-10-14T15:45:24.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.067 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:25.067 Verification LBA range: start 0x0 length 0x4000 00:28:25.067 Nvme0n1 : 28.62 10690.03 41.76 0.00 0.00 11954.65 419.35 3019898.88 00:28:25.067 [2024-10-14T15:45:24.205Z] =================================================================================================================== 00:28:25.067 [2024-10-14T15:45:24.205Z] Total : 10690.03 41.76 0.00 0.00 11954.65 419.35 3019898.88 00:28:25.067 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:25.326 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:28:25.326 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:25.326 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:28:25.326 17:45:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:25.326 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:28:25.326 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:25.326 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:28:25.326 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:25.326 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:25.326 rmmod nvme_tcp 00:28:25.326 rmmod nvme_fabrics 00:28:25.326 rmmod nvme_keyring 00:28:25.326 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:25.326 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:28:25.326 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:28:25.327 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 1217335 ']' 00:28:25.327 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 1217335 00:28:25.327 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1217335 ']' 00:28:25.327 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1217335 00:28:25.327 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:28:25.327 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:25.327 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1217335 00:28:25.587 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:25.587 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:25.587 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1217335' 00:28:25.587 killing process with pid 1217335 00:28:25.587 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1217335 00:28:25.587 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1217335 00:28:25.587 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:25.587 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:25.587 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:25.587 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:28:25.587 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:28:25.587 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:25.587 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:28:25.587 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:25.587 17:45:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:25.587 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.587 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.587 17:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.130 17:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:28.130 00:28:28.130 real 0m40.475s 00:28:28.130 user 1m49.712s 00:28:28.130 sys 0m11.398s 00:28:28.130 17:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:28.130 17:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:28.130 ************************************ 00:28:28.130 END TEST nvmf_host_multipath_status 00:28:28.130 ************************************ 00:28:28.130 17:45:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:28.130 17:45:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:28.130 17:45:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:28.130 17:45:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.130 ************************************ 00:28:28.130 START TEST nvmf_discovery_remove_ifc 00:28:28.130 ************************************ 00:28:28.130 17:45:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:28.130 * Looking for test storage... 
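The multipath_status teardown traced above reduces to a short command sequence. A minimal sketch of the equivalent standalone steps, assuming the workspace path from this run; the subsystem NQN and the $nvmfpid value (1217335 here) are specific to this run, and the final netns deletion is an assumption about what _remove_spdk_ns does rather than its literal body:

    #!/usr/bin/env bash
    # Sketch of the teardown above, not the test script itself.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
    kill "$nvmfpid" && wait "$nvmfpid"          # stop the nvmf_tgt reactor process
    modprobe -v -r nvme-tcp                     # unloads nvme_tcp/nvme_fabrics/nvme_keyring, as logged
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the SPDK_NVMF firewall rules
    ip netns delete cvl_0_0_ns_spdk             # assumed effect of remove_spdk_ns: drop the target netns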
00:28:28.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:28:28.130 17:45:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:28:28.131 17:45:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version
00:28:28.131 17:45:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:28:28.131 17:45:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:28:28.131 [... scripts/common.sh@333-@368 cmp_versions trace omitted: lcov 1.15 compared against 2 with op '<', returns 0 ("less than") ...]
00:28:28.131 17:45:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:28:28.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:28.131 --rc genhtml_branch_coverage=1
00:28:28.131 --rc genhtml_function_coverage=1
00:28:28.131 --rc genhtml_legend=1
00:28:28.131 --rc geninfo_all_blocks=1
00:28:28.131 --rc geninfo_unexecuted_blocks=1
00:28:28.131
00:28:28.131 '
00:28:28.131 [... matching LCOV_OPTS assignment and LCOV='lcov ...' export/assignment traces omitted: the same option block repeated three more times ...]
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:28.131 [... paths/export.sh@2-@6 trace omitted: PATH repeatedly prepended with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin, then exported and echoed ...]
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:28:28.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
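The "line 33: [: : integer expression expected" message above is a classic bash pitfall: test's -eq needs an integer, and here it is fed an empty (unset) variable. A minimal reproduction and the usual guard, with an illustrative variable name rather than the one from nvmf/common.sh:

    # Reproduction of the bash error seen at common.sh line 33.
    unset MAYBE_NUM                                      # illustrative, not a common.sh variable
    [ "$MAYBE_NUM" -eq 1 ] && echo yes                   # -> "[: : integer expression expected" (status 2)
    [ "${MAYBE_NUM:-0}" -eq 1 ] || echo "treated as 0"   # guard: default the empty value before -eq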
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:28.131 17:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:34.715 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:28:34.715 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:28:34.715 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable
00:28:34.715 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:28:34.715 [... nvmf/common.sh@313-@344 trace omitted: pci_devs/pci_net_devs/net_devs arrays declared, then e810/x722/mlx populated from pci_bus_cache with the known Intel (0x8086) and Mellanox (0x15b3) device IDs ...]
00:28:34.715 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:28:34.715 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:28:34.715 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:28:34.715 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:28:34.715 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:28:34.715 Found 0000:86:00.0 (0x8086 - 0x159b)
00:28:34.715 [... per-device driver checks omitted: ice is neither unknown nor unbound, and 0x159b matches neither 0x1017 nor 0x1019 ...]
00:28:34.715 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:28:34.715 Found 0000:86:00.1 (0x8086 - 0x159b)
00:28:34.715 [... pci_net_devs enumeration loop omitted: /sys/bus/pci/devices/<pci>/net/* scanned, interface state checked ('up == up') for both ports ...]
00:28:34.715 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:28:34.715 Found net devices under 0000:86:00.0: cvl_0_0
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:28:34.716 Found net devices under 0000:86:00.1: cvl_0_1
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
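For reference, the two-sided topology that nvmf_tcp_init builds from the pair of e810 ports can be reproduced by hand. A sketch using the same interface names and addresses as the trace above (run as root; cvl_0_0/cvl_0_1 are this rig's device names and will differ elsewhere):

    # Target port lives in its own network namespace; initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target-side port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP on the host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the netns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port through the host firewall, tagged so teardown can strip it again.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1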
17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:34.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:34.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:28:34.716 00:28:34.716 --- 10.0.0.2 ping statistics --- 00:28:34.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.716 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:34.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:34.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:28:34.716 00:28:34.716 --- 10.0.0.1 ping statistics --- 00:28:34.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.716 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:34.716 17:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=1226810 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 1226810 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1226810 ']' 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:34.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:34.716 [2024-10-14 17:45:33.058666] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:28:34.716 [2024-10-14 17:45:33.058710] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.716 [2024-10-14 17:45:33.130640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.716 [2024-10-14 17:45:33.171119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.716 [2024-10-14 17:45:33.171153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.716 [2024-10-14 17:45:33.171160] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.716 [2024-10-14 17:45:33.171166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.716 [2024-10-14 17:45:33.171174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:34.716 [2024-10-14 17:45:33.171733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:34.716 [2024-10-14 17:45:33.308774] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:34.716 [2024-10-14 17:45:33.316935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:34.716 null0 00:28:34.716 [2024-10-14 17:45:33.348936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1226872 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
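waitforlisten blocks until the freshly forked nvmf_tgt answers on /var/tmp/spdk.sock. Its body is not expanded in this excerpt, so the loop below is only a sketch of the idea: poll a harmless RPC until the socket answers, bailing out if the PID dies. Only the PID and socket path come from the log; rpc_get_methods is a standard SPDK RPC, and the retry count, interval, and rpc.py invocation style are assumptions.

```bash
# Hedged sketch of the waitforlisten idea; the polling body is assumed.
wait_for_rpc_sock() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
  for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # target exited early
    scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
    sleep 0.1
  done
  return 1
}
wait_for_rpc_sock 1226810   # the nvmfpid captured above
```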
--wait-for-rpc -L bdev_nvme 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1226872 /tmp/host.sock 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1226872 ']' 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:34.716 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:34.716 [2024-10-14 17:45:33.417990] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:28:34.716 [2024-10-14 17:45:33.418030] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226872 ] 00:28:34.716 [2024-10-14 17:45:33.484635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.716 [2024-10-14 17:45:33.526502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
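The host side is a second nvmf_tgt (-m 0x1, RPC socket /tmp/host.sock) launched with --wait-for-rpc so bdev_nvme options can be set before the framework spins up. The three RPCs that follow in the trace, issued here via rpc.py; every flag is verbatim from the log, only the rpc.py invocation style is assumed:

```bash
H="scripts/rpc.py -s /tmp/host.sock"
$H bdev_nvme_set_options -e 1     # must land before framework_start_init
$H framework_start_init           # releases the --wait-for-rpc pause
$H bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach
```

The short loss/reconnect timeouts matter later: they are what lets the controller be declared lost within seconds once the interface is pulled.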
-- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.716 17:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:35.658 [2024-10-14 17:45:34.706104] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:35.658 [2024-10-14 17:45:34.706124] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:35.658 [2024-10-14 17:45:34.706138] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:35.919 [2024-10-14 17:45:34.832540] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:35.919 [2024-10-14 17:45:35.009468] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:35.919 [2024-10-14 17:45:35.009509] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:35.919 [2024-10-14 17:45:35.009528] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:35.919 [2024-10-14 17:45:35.009540] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:35.919 [2024-10-14 17:45:35.009556] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:35.919 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.919 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:35.919 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:35.919 [2024-10-14 17:45:35.014467] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x182ea60 was disconnected and freed. delete nvme_qpair. 
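get_bdev_list, whose expansion repeats many times below, is a four-stage pipeline over the host socket; reconstructed from its @29 xtrace lines (rpc_cmd resolves to rpc.py here):

```bash
# List bdev names as one sorted, space-separated string.
get_bdev_list() {
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
    | jq -r '.[].name' | sort | xargs
}
get_bdev_list   # prints "nvme0n1" once the discovery attach completes
```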
00:28:35.919 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:35.919 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:35.919 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.919 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:35.919 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:35.919 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:35.919 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.919 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:35.919 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:36.210 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:36.210 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:36.210 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:36.210 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:36.210 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:36.210 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.210 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:36.210 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:36.210 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:36.210 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.210 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:36.210 17:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:37.203 17:45:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:37.203 17:45:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:37.203 17:45:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:37.203 17:45:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.204 17:45:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:37.204 17:45:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:37.204 17:45:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:37.204 17:45:36 
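With nvme0n1 present, the test injects the fault (ip addr del plus link down inside the namespace, @75-@76 above) and enters wait_for_bdev '': the get_bdev_list/sleep-1 pairs that repeat below. A sketch of that loop, with the per-iteration shape taken from the trace and the overall timeout assumed:

```bash
wait_for_bdev() {
  local expected=$1 tries=${2:-60}   # the timeout bound is an assumption
  while [[ "$(get_bdev_list)" != "$expected" ]]; do
    (( tries-- > 0 )) || return 1
    sleep 1
  done
}
wait_for_bdev ''   # wait for nvme0n1 to vanish after the link drop
```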
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.204 17:45:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:37.204 17:45:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:38.139 17:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:38.139 17:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:38.139 17:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:38.139 17:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.139 17:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:38.139 17:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:38.139 17:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:38.398 17:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.398 17:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:38.398 17:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:39.333 17:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:39.333 17:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:39.333 17:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:39.333 17:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:39.333 17:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.333 17:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:39.333 17:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:39.333 17:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.333 17:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:39.333 17:45:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:40.269 17:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:40.269 17:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:40.269 17:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:40.269 17:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.269 17:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:40.269 17:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:40.269 17:45:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:40.269 17:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.269 17:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:40.269 17:45:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:41.646 17:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:41.646 17:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:41.646 17:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:41.646 17:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.646 17:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:41.646 17:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:41.646 17:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:41.646 17:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.646 [2024-10-14 17:45:40.451240] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:41.646 [2024-10-14 17:45:40.451278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.646 [2024-10-14 17:45:40.451288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.646 [2024-10-14 17:45:40.451297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.646 [2024-10-14 17:45:40.451304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.646 [2024-10-14 17:45:40.451311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.646 [2024-10-14 17:45:40.451318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.646 [2024-10-14 17:45:40.451325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.646 [2024-10-14 17:45:40.451331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.646 [2024-10-14 17:45:40.451338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.646 [2024-10-14 17:45:40.451344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.646 [2024-10-14 17:45:40.451355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b2e0 is same with the state(6) to be set 00:28:41.646 [2024-10-14 
17:45:40.461263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x180b2e0 (9): Bad file descriptor 00:28:41.646 17:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:41.646 17:45:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:41.646 [2024-10-14 17:45:40.471300] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:42.583 17:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:42.583 17:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:42.583 17:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:42.583 17:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.583 17:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:42.583 17:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:42.583 17:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:42.583 [2024-10-14 17:45:41.533633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:42.583 [2024-10-14 17:45:41.533706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x180b2e0 with addr=10.0.0.2, port=4420 00:28:42.583 [2024-10-14 17:45:41.533737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b2e0 is same with the state(6) to be set 00:28:42.583 [2024-10-14 17:45:41.533792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x180b2e0 (9): Bad file descriptor 00:28:42.583 [2024-10-14 17:45:41.534737] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:42.583 [2024-10-14 17:45:41.534801] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:42.583 [2024-10-14 17:45:41.534823] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:42.583 [2024-10-14 17:45:41.534845] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:42.583 [2024-10-14 17:45:41.534907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.583 [2024-10-14 17:45:41.534933] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:42.583 17:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.583 17:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:42.583 17:45:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:43.520 [2024-10-14 17:45:42.537424] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:28:43.520 [2024-10-14 17:45:42.537445] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:43.520 [2024-10-14 17:45:42.537451] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:43.520 [2024-10-14 17:45:42.537458] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:28:43.520 [2024-10-14 17:45:42.537470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.520 [2024-10-14 17:45:42.537485] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:43.520 [2024-10-14 17:45:42.537504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.520 [2024-10-14 17:45:42.537518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.520 [2024-10-14 17:45:42.537527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.520 [2024-10-14 17:45:42.537534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.520 [2024-10-14 17:45:42.537541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.520 [2024-10-14 17:45:42.537547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.520 [2024-10-14 17:45:42.537553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.520 [2024-10-14 17:45:42.537559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.520 [2024-10-14 17:45:42.537566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:43.520 [2024-10-14 17:45:42.537573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.520 [2024-10-14 17:45:42.537579] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:28:43.520 [2024-10-14 17:45:42.538031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fa9c0 (9): Bad file descriptor 00:28:43.520 [2024-10-14 17:45:42.539041] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:43.520 [2024-10-14 17:45:42.539051] nvme_ctrlr.c:1233:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:43.520 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:43.520 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:43.520 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:43.520 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.520 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:43.520 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:43.520 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:43.520 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.520 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:43.520 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.520 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.779 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:43.779 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:43.779 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:43.779 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:43.779 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.779 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:43.779 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:43.779 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:43.779 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.779 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:43.779 17:45:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:44.715 17:45:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:44.715 17:45:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:44.715 17:45:43 
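Once the controller has been declared lost (the 2-second ctrlr-loss-timeout expired during the failed reconnects above) and the bdev list reads empty, the test reverses the fault; commands verbatim from @82-@83, followed by the @86 wait for the re-discovered controller, whose namespace now enumerates as nvme1n1:

```bash
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1   # same helper as sketched earlier
```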
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:44.715 17:45:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.715 17:45:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:44.715 17:45:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:44.715 17:45:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:44.715 17:45:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.715 17:45:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:44.715 17:45:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:45.652 [2024-10-14 17:45:44.591064] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:45.652 [2024-10-14 17:45:44.591080] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:45.652 [2024-10-14 17:45:44.591091] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:45.652 [2024-10-14 17:45:44.718478] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:45.652 [2024-10-14 17:45:44.781747] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:45.652 [2024-10-14 17:45:44.781779] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:45.652 [2024-10-14 17:45:44.781796] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:45.652 [2024-10-14 17:45:44.781808] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:45.652 [2024-10-14 17:45:44.781815] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:45.652 [2024-10-14 17:45:44.789446] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x18069f0 was disconnected and freed. delete nvme_qpair. 
00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1226872 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1226872 ']' 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1226872 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1226872 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1226872' 00:28:45.911 killing process with pid 1226872 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1226872 00:28:45.911 17:45:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1226872 00:28:45.911 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:45.911 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:45.911 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:28:45.911 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:45.911 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:28:45.911 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:45.911 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:45.911 rmmod nvme_tcp 00:28:46.171 rmmod nvme_fabrics 00:28:46.171 rmmod nvme_keyring 00:28:46.171 17:45:45 
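killprocess, traced twice during teardown (once for the host app above, once for the target below), guards against two footguns before killing: a dead PID and a bare sudo wrapper. Reconstructed from its xtrace; the sudo branch is never taken in this run, so its body is omitted here:

```bash
killprocess() {
  local pid=$1 name
  [[ -n $pid ]] || return 1                  # @950: require a PID
  kill -0 "$pid" || return 1                 # @954: still alive?
  name=$(ps --no-headers -o comm= "$pid")    # @955/@956, Linux path
  # @960 compares "$name" against sudo; not exercised in this run.
  echo "killing process with pid $pid"       # @968
  kill "$pid" && wait "$pid"                 # @969 kill, @974 reap
}
```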
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:46.171 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:28:46.171 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:28:46.171 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 1226810 ']' 00:28:46.171 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 1226810 00:28:46.171 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1226810 ']' 00:28:46.171 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1226810 00:28:46.171 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:28:46.171 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:46.171 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1226810 00:28:46.171 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:46.171 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:46.171 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1226810' 00:28:46.171 killing process with pid 1226810 00:28:46.171 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1226810 00:28:46.171 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1226810 00:28:46.171 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:46.171 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:46.171 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:46.429 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:28:46.429 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:28:46.429 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:46.429 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:28:46.429 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:46.429 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:46.429 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.429 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.429 17:45:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.334 17:45:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:48.334 00:28:48.334 real 0m20.556s 00:28:48.334 user 0m24.802s 00:28:48.334 sys 0m5.836s 00:28:48.334 17:45:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
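The iptr step traced above undoes every firewall change in one stroke: because each rule was inserted with an SPDK_NVMF comment (see the ACCEPT rule earlier), teardown rewrites the ruleset minus those lines instead of tracking individual rules:

```bash
# Stateless firewall cleanup, exactly as traced above.
iptables-save | grep -v SPDK_NVMF | iptables-restore
```

Tagging rules and filtering the saved ruleset keeps cleanup idempotent even when a test dies partway through.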
common/autotest_common.sh@1126 -- # xtrace_disable 00:28:48.334 17:45:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:48.334 ************************************ 00:28:48.334 END TEST nvmf_discovery_remove_ifc 00:28:48.334 ************************************ 00:28:48.334 17:45:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:48.334 17:45:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:48.334 17:45:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:48.334 17:45:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.334 ************************************ 00:28:48.334 START TEST nvmf_identify_kernel_target 00:28:48.334 ************************************ 00:28:48.334 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:48.594 * Looking for test storage... 00:28:48.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:28:48.594 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:48.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.595 --rc genhtml_branch_coverage=1 00:28:48.595 --rc genhtml_function_coverage=1 00:28:48.595 --rc genhtml_legend=1 00:28:48.595 --rc geninfo_all_blocks=1 00:28:48.595 --rc geninfo_unexecuted_blocks=1 00:28:48.595 00:28:48.595 ' 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:48.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.595 --rc genhtml_branch_coverage=1 00:28:48.595 --rc genhtml_function_coverage=1 00:28:48.595 --rc genhtml_legend=1 00:28:48.595 --rc geninfo_all_blocks=1 00:28:48.595 --rc geninfo_unexecuted_blocks=1 00:28:48.595 00:28:48.595 ' 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:48.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.595 --rc genhtml_branch_coverage=1 00:28:48.595 --rc genhtml_function_coverage=1 00:28:48.595 --rc genhtml_legend=1 00:28:48.595 --rc geninfo_all_blocks=1 00:28:48.595 --rc geninfo_unexecuted_blocks=1 00:28:48.595 00:28:48.595 ' 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:48.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.595 --rc genhtml_branch_coverage=1 00:28:48.595 --rc genhtml_function_coverage=1 00:28:48.595 --rc genhtml_legend=1 00:28:48.595 --rc geninfo_all_blocks=1 00:28:48.595 --rc geninfo_unexecuted_blocks=1 00:28:48.595 00:28:48.595 ' 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
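The lcov probe above runs `lt 1.15 2` through cmp_versions: both version strings are split on `.-:` and compared numerically field by field, with missing fields reading as zero. A condensed equivalent, assuming purely numeric fields (which holds for this 1.15-vs-2 check):

```bash
lt() {  # lt A B -> success if version A sorts before version B
  local -a v1 v2; local i
  IFS='.-:' read -ra v1 <<< "$1"
  IFS='.-:' read -ra v2 <<< "$2"
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    (( 10#${v1[i]:-0} < 10#${v2[i]:-0} )) && return 0
    (( 10#${v1[i]:-0} > 10#${v2[i]:-0} )) && return 1
  done
  return 1  # equal
}
lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # the branch this run takes
```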
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
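The `nvme gen-hostnqn` call captured above emits a host NQN in the standard uuid form. An equivalent of that output format without nvme-cli, using the kernel's UUID source (the /proc path is standard Linux, not from this log):

```bash
printf 'nqn.2014-08.org.nvmexpress:uuid:%s\n' \
  "$(cat /proc/sys/kernel/random/uuid)"
```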
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:28:48.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:28:48.595 17:45:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:28:55.189 17:45:53 
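The `[: : integer expression expected` complaint captured just above is a script hiccup rather than a test failure: line 33 of nvmf/common.sh applies a numeric test to a variable that is empty in this environment. The reproduction below uses a placeholder name, since the trace shows only the expansion, not the variable:

```bash
flag=''                    # placeholder; the real variable name isn't shown
[ "$flag" -eq 1 ]          # -> "[: : integer expression expected", status 2
[ "${flag:-0}" -eq 1 ]     # defaulting the empty value sidesteps the error
```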
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:55.189 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:55.189 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:55.189 Found net devices under 0000:86:00.0: cvl_0_0 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:55.189 Found net devices under 0000:86:00.1: cvl_0_1 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:55.189 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:55.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:55.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:28:55.190 00:28:55.190 --- 10.0.0.2 ping statistics --- 00:28:55.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.190 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:55.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:55.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:28:55.190 00:28:55.190 --- 10.0.0.1 ping statistics --- 00:28:55.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.190 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:28:55.190 17:45:53 
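[Note] The nvmftestinit/nvmf_tcp_init sequence above wires the two E810 ports back to back: one port (cvl_0_0) is moved into a private network namespace, the pair is addressed as 10.0.0.2/10.0.0.1, a tagged iptables rule opens TCP port 4420, and a ping in each direction proves the link. Condensed into a standalone sketch for reference (interface names and addresses are the ones this log reports; the iptables comment is abbreviated; run as root):

    ip netns add cvl_0_0_ns_spdk                    # one side gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1             # the other port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF              # tag lets teardown strip exactly this rule
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1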
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:55.190 17:45:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:57.727 Waiting for block devices as requested 00:28:57.727 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:57.727 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:57.727 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:57.727 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:57.727 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:57.727 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:57.986 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:57.986 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:57.986 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:58.245 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:58.245 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:58.245 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:58.245 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:58.505 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:58.505 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:58.505 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:58.764 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
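[Note] configure_kernel_target, running here, exports a local NVMe drive through the in-kernel nvmet target over configfs; the block-device scan just above (spdk-gpt.py, blkid) is how the script picks an unpartitioned namespace to export. xtrace hides the redirection targets of the bare echo commands, so the attribute paths below follow the standard nvmet configfs layout and should be read as a hedged reconstruction, not a verbatim replay (NQN, device, and address values are this run's):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1
    modprobe nvmet                                     # nvmet_tcp is pulled in when the port binds
    mkdir -p "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # surfaces as Model Number below
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                # binding the subsystem starts the listener
    nvme discover -t tcp -a 10.0.0.1 -s 4420           # expect the two discovery records shown below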
00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:58.764 No valid GPT data, bailing 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:58.764 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:28:59.028 00:28:59.028 Discovery Log Number of Records 2, Generation counter 2 00:28:59.028 =====Discovery Log Entry 0====== 00:28:59.028 trtype: tcp 00:28:59.028 adrfam: ipv4 00:28:59.029 subtype: current discovery subsystem 00:28:59.029 treq: not specified, sq flow control disable supported 00:28:59.029 portid: 1 00:28:59.029 trsvcid: 4420 00:28:59.029 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:59.029 traddr: 10.0.0.1 00:28:59.029 eflags: none 00:28:59.029 sectype: none 00:28:59.029 =====Discovery Log Entry 1====== 00:28:59.029 trtype: tcp 00:28:59.029 adrfam: ipv4 00:28:59.029 subtype: nvme subsystem 00:28:59.029 treq: not specified, sq flow control disable 
supported 00:28:59.029 portid: 1 00:28:59.029 trsvcid: 4420 00:28:59.029 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:59.029 traddr: 10.0.0.1 00:28:59.029 eflags: none 00:28:59.029 sectype: none 00:28:59.029 17:45:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:59.029 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:59.029 ===================================================== 00:28:59.029 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:59.029 ===================================================== 00:28:59.029 Controller Capabilities/Features 00:28:59.029 ================================ 00:28:59.029 Vendor ID: 0000 00:28:59.029 Subsystem Vendor ID: 0000 00:28:59.029 Serial Number: 6c1f47b29a44d036ee1c 00:28:59.029 Model Number: Linux 00:28:59.029 Firmware Version: 6.8.9-20 00:28:59.029 Recommended Arb Burst: 0 00:28:59.029 IEEE OUI Identifier: 00 00 00 00:28:59.029 Multi-path I/O 00:28:59.029 May have multiple subsystem ports: No 00:28:59.029 May have multiple controllers: No 00:28:59.029 Associated with SR-IOV VF: No 00:28:59.029 Max Data Transfer Size: Unlimited 00:28:59.029 Max Number of Namespaces: 0 00:28:59.029 Max Number of I/O Queues: 1024 00:28:59.029 NVMe Specification Version (VS): 1.3 00:28:59.029 NVMe Specification Version (Identify): 1.3 00:28:59.029 Maximum Queue Entries: 1024 00:28:59.029 Contiguous Queues Required: No 00:28:59.029 Arbitration Mechanisms Supported 00:28:59.029 Weighted Round Robin: Not Supported 00:28:59.029 Vendor Specific: Not Supported 00:28:59.030 Reset Timeout: 7500 ms 00:28:59.030 Doorbell Stride: 4 bytes 00:28:59.030 NVM Subsystem Reset: Not Supported 00:28:59.030 Command Sets Supported 00:28:59.030 NVM Command Set: Supported 00:28:59.030 Boot Partition: Not Supported 00:28:59.030 Memory Page Size Minimum: 4096 bytes 00:28:59.030 Memory Page Size Maximum: 4096 bytes 00:28:59.030 Persistent Memory Region: Not Supported 00:28:59.030 Optional Asynchronous Events Supported 00:28:59.030 Namespace Attribute Notices: Not Supported 00:28:59.030 Firmware Activation Notices: Not Supported 00:28:59.030 ANA Change Notices: Not Supported 00:28:59.030 PLE Aggregate Log Change Notices: Not Supported 00:28:59.030 LBA Status Info Alert Notices: Not Supported 00:28:59.030 EGE Aggregate Log Change Notices: Not Supported 00:28:59.030 Normal NVM Subsystem Shutdown event: Not Supported 00:28:59.030 Zone Descriptor Change Notices: Not Supported 00:28:59.030 Discovery Log Change Notices: Supported 00:28:59.030 Controller Attributes 00:28:59.030 128-bit Host Identifier: Not Supported 00:28:59.030 Non-Operational Permissive Mode: Not Supported 00:28:59.030 NVM Sets: Not Supported 00:28:59.030 Read Recovery Levels: Not Supported 00:28:59.030 Endurance Groups: Not Supported 00:28:59.030 Predictable Latency Mode: Not Supported 00:28:59.030 Traffic Based Keep ALive: Not Supported 00:28:59.030 Namespace Granularity: Not Supported 00:28:59.030 SQ Associations: Not Supported 00:28:59.030 UUID List: Not Supported 00:28:59.030 Multi-Domain Subsystem: Not Supported 00:28:59.030 Fixed Capacity Management: Not Supported 00:28:59.030 Variable Capacity Management: Not Supported 00:28:59.030 Delete Endurance Group: Not Supported 00:28:59.030 Delete NVM Set: Not Supported 00:28:59.030 Extended LBA Formats Supported: Not Supported 00:28:59.030 Flexible Data Placement 
Supported: Not Supported 00:28:59.030 00:28:59.030 Controller Memory Buffer Support 00:28:59.030 ================================ 00:28:59.030 Supported: No 00:28:59.030 00:28:59.030 Persistent Memory Region Support 00:28:59.030 ================================ 00:28:59.030 Supported: No 00:28:59.030 00:28:59.031 Admin Command Set Attributes 00:28:59.031 ============================ 00:28:59.031 Security Send/Receive: Not Supported 00:28:59.031 Format NVM: Not Supported 00:28:59.031 Firmware Activate/Download: Not Supported 00:28:59.031 Namespace Management: Not Supported 00:28:59.031 Device Self-Test: Not Supported 00:28:59.031 Directives: Not Supported 00:28:59.031 NVMe-MI: Not Supported 00:28:59.031 Virtualization Management: Not Supported 00:28:59.031 Doorbell Buffer Config: Not Supported 00:28:59.031 Get LBA Status Capability: Not Supported 00:28:59.031 Command & Feature Lockdown Capability: Not Supported 00:28:59.031 Abort Command Limit: 1 00:28:59.031 Async Event Request Limit: 1 00:28:59.031 Number of Firmware Slots: N/A 00:28:59.031 Firmware Slot 1 Read-Only: N/A 00:28:59.031 Firmware Activation Without Reset: N/A 00:28:59.031 Multiple Update Detection Support: N/A 00:28:59.031 Firmware Update Granularity: No Information Provided 00:28:59.031 Per-Namespace SMART Log: No 00:28:59.031 Asymmetric Namespace Access Log Page: Not Supported 00:28:59.031 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:59.031 Command Effects Log Page: Not Supported 00:28:59.031 Get Log Page Extended Data: Supported 00:28:59.031 Telemetry Log Pages: Not Supported 00:28:59.031 Persistent Event Log Pages: Not Supported 00:28:59.031 Supported Log Pages Log Page: May Support 00:28:59.031 Commands Supported & Effects Log Page: Not Supported 00:28:59.031 Feature Identifiers & Effects Log Page:May Support 00:28:59.031 NVMe-MI Commands & Effects Log Page: May Support 00:28:59.031 Data Area 4 for Telemetry Log: Not Supported 00:28:59.031 Error Log Page Entries Supported: 1 00:28:59.031 Keep Alive: Not Supported 00:28:59.031 00:28:59.031 NVM Command Set Attributes 00:28:59.031 ========================== 00:28:59.031 Submission Queue Entry Size 00:28:59.031 Max: 1 00:28:59.031 Min: 1 00:28:59.031 Completion Queue Entry Size 00:28:59.031 Max: 1 00:28:59.031 Min: 1 00:28:59.031 Number of Namespaces: 0 00:28:59.031 Compare Command: Not Supported 00:28:59.031 Write Uncorrectable Command: Not Supported 00:28:59.031 Dataset Management Command: Not Supported 00:28:59.031 Write Zeroes Command: Not Supported 00:28:59.031 Set Features Save Field: Not Supported 00:28:59.031 Reservations: Not Supported 00:28:59.031 Timestamp: Not Supported 00:28:59.031 Copy: Not Supported 00:28:59.032 Volatile Write Cache: Not Present 00:28:59.032 Atomic Write Unit (Normal): 1 00:28:59.032 Atomic Write Unit (PFail): 1 00:28:59.032 Atomic Compare & Write Unit: 1 00:28:59.032 Fused Compare & Write: Not Supported 00:28:59.032 Scatter-Gather List 00:28:59.032 SGL Command Set: Supported 00:28:59.032 SGL Keyed: Not Supported 00:28:59.032 SGL Bit Bucket Descriptor: Not Supported 00:28:59.032 SGL Metadata Pointer: Not Supported 00:28:59.032 Oversized SGL: Not Supported 00:28:59.032 SGL Metadata Address: Not Supported 00:28:59.032 SGL Offset: Supported 00:28:59.032 Transport SGL Data Block: Not Supported 00:28:59.032 Replay Protected Memory Block: Not Supported 00:28:59.032 00:28:59.032 Firmware Slot Information 00:28:59.032 ========================= 00:28:59.032 Active slot: 0 00:28:59.032 00:28:59.032 00:28:59.032 Error Log 00:28:59.032 
========= 00:28:59.032 00:28:59.032 Active Namespaces 00:28:59.032 ================= 00:28:59.032 Discovery Log Page 00:28:59.032 ================== 00:28:59.032 Generation Counter: 2 00:28:59.032 Number of Records: 2 00:28:59.032 Record Format: 0 00:28:59.032 00:28:59.032 Discovery Log Entry 0 00:28:59.032 ---------------------- 00:28:59.032 Transport Type: 3 (TCP) 00:28:59.032 Address Family: 1 (IPv4) 00:28:59.032 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:59.032 Entry Flags: 00:28:59.032 Duplicate Returned Information: 0 00:28:59.032 Explicit Persistent Connection Support for Discovery: 0 00:28:59.032 Transport Requirements: 00:28:59.032 Secure Channel: Not Specified 00:28:59.032 Port ID: 1 (0x0001) 00:28:59.032 Controller ID: 65535 (0xffff) 00:28:59.032 Admin Max SQ Size: 32 00:28:59.032 Transport Service Identifier: 4420 00:28:59.032 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:59.032 Transport Address: 10.0.0.1 00:28:59.032 Discovery Log Entry 1 00:28:59.032 ---------------------- 00:28:59.032 Transport Type: 3 (TCP) 00:28:59.032 Address Family: 1 (IPv4) 00:28:59.032 Subsystem Type: 2 (NVM Subsystem) 00:28:59.032 Entry Flags: 00:28:59.032 Duplicate Returned Information: 0 00:28:59.032 Explicit Persistent Connection Support for Discovery: 0 00:28:59.032 Transport Requirements: 00:28:59.032 Secure Channel: Not Specified 00:28:59.032 Port ID: 1 (0x0001) 00:28:59.032 Controller ID: 65535 (0xffff) 00:28:59.032 Admin Max SQ Size: 32 00:28:59.032 Transport Service Identifier: 4420 00:28:59.032 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:59.032 Transport Address: 10.0.0.1 00:28:59.032 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:59.032 get_feature(0x01) failed 00:28:59.032 get_feature(0x02) failed 00:28:59.032 get_feature(0x04) failed 00:28:59.032 ===================================================== 00:28:59.032 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:59.032 ===================================================== 00:28:59.032 Controller Capabilities/Features 00:28:59.032 ================================ 00:28:59.032 Vendor ID: 0000 00:28:59.032 Subsystem Vendor ID: 0000 00:28:59.032 Serial Number: c56b7bf143d7347990b0 00:28:59.032 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:59.032 Firmware Version: 6.8.9-20 00:28:59.032 Recommended Arb Burst: 6 00:28:59.032 IEEE OUI Identifier: 00 00 00 00:28:59.032 Multi-path I/O 00:28:59.032 May have multiple subsystem ports: Yes 00:28:59.032 May have multiple controllers: Yes 00:28:59.032 Associated with SR-IOV VF: No 00:28:59.032 Max Data Transfer Size: Unlimited 00:28:59.032 Max Number of Namespaces: 1024 00:28:59.032 Max Number of I/O Queues: 128 00:28:59.032 NVMe Specification Version (VS): 1.3 00:28:59.032 NVMe Specification Version (Identify): 1.3 00:28:59.032 Maximum Queue Entries: 1024 00:28:59.032 Contiguous Queues Required: No 00:28:59.032 Arbitration Mechanisms Supported 00:28:59.032 Weighted Round Robin: Not Supported 00:28:59.032 Vendor Specific: Not Supported 00:28:59.032 Reset Timeout: 7500 ms 00:28:59.032 Doorbell Stride: 4 bytes 00:28:59.032 NVM Subsystem Reset: Not Supported 00:28:59.032 Command Sets Supported 00:28:59.032 NVM Command Set: Supported 00:28:59.032 Boot Partition: Not Supported 00:28:59.032 
Memory Page Size Minimum: 4096 bytes 00:28:59.032 Memory Page Size Maximum: 4096 bytes 00:28:59.032 Persistent Memory Region: Not Supported 00:28:59.032 Optional Asynchronous Events Supported 00:28:59.032 Namespace Attribute Notices: Supported 00:28:59.032 Firmware Activation Notices: Not Supported 00:28:59.032 ANA Change Notices: Supported 00:28:59.032 PLE Aggregate Log Change Notices: Not Supported 00:28:59.032 LBA Status Info Alert Notices: Not Supported 00:28:59.032 EGE Aggregate Log Change Notices: Not Supported 00:28:59.032 Normal NVM Subsystem Shutdown event: Not Supported 00:28:59.032 Zone Descriptor Change Notices: Not Supported 00:28:59.032 Discovery Log Change Notices: Not Supported 00:28:59.032 Controller Attributes 00:28:59.032 128-bit Host Identifier: Supported 00:28:59.032 Non-Operational Permissive Mode: Not Supported 00:28:59.032 NVM Sets: Not Supported 00:28:59.032 Read Recovery Levels: Not Supported 00:28:59.032 Endurance Groups: Not Supported 00:28:59.032 Predictable Latency Mode: Not Supported 00:28:59.032 Traffic Based Keep ALive: Supported 00:28:59.032 Namespace Granularity: Not Supported 00:28:59.032 SQ Associations: Not Supported 00:28:59.032 UUID List: Not Supported 00:28:59.032 Multi-Domain Subsystem: Not Supported 00:28:59.032 Fixed Capacity Management: Not Supported 00:28:59.032 Variable Capacity Management: Not Supported 00:28:59.032 Delete Endurance Group: Not Supported 00:28:59.032 Delete NVM Set: Not Supported 00:28:59.032 Extended LBA Formats Supported: Not Supported 00:28:59.032 Flexible Data Placement Supported: Not Supported 00:28:59.032 00:28:59.032 Controller Memory Buffer Support 00:28:59.032 ================================ 00:28:59.032 Supported: No 00:28:59.032 00:28:59.032 Persistent Memory Region Support 00:28:59.032 ================================ 00:28:59.032 Supported: No 00:28:59.032 00:28:59.032 Admin Command Set Attributes 00:28:59.032 ============================ 00:28:59.032 Security Send/Receive: Not Supported 00:28:59.032 Format NVM: Not Supported 00:28:59.032 Firmware Activate/Download: Not Supported 00:28:59.032 Namespace Management: Not Supported 00:28:59.032 Device Self-Test: Not Supported 00:28:59.032 Directives: Not Supported 00:28:59.032 NVMe-MI: Not Supported 00:28:59.032 Virtualization Management: Not Supported 00:28:59.032 Doorbell Buffer Config: Not Supported 00:28:59.032 Get LBA Status Capability: Not Supported 00:28:59.032 Command & Feature Lockdown Capability: Not Supported 00:28:59.032 Abort Command Limit: 4 00:28:59.032 Async Event Request Limit: 4 00:28:59.032 Number of Firmware Slots: N/A 00:28:59.032 Firmware Slot 1 Read-Only: N/A 00:28:59.032 Firmware Activation Without Reset: N/A 00:28:59.032 Multiple Update Detection Support: N/A 00:28:59.032 Firmware Update Granularity: No Information Provided 00:28:59.032 Per-Namespace SMART Log: Yes 00:28:59.032 Asymmetric Namespace Access Log Page: Supported 00:28:59.032 ANA Transition Time : 10 sec 00:28:59.032 00:28:59.032 Asymmetric Namespace Access Capabilities 00:28:59.032 ANA Optimized State : Supported 00:28:59.032 ANA Non-Optimized State : Supported 00:28:59.032 ANA Inaccessible State : Supported 00:28:59.032 ANA Persistent Loss State : Supported 00:28:59.032 ANA Change State : Supported 00:28:59.032 ANAGRPID is not changed : No 00:28:59.032 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:59.032 00:28:59.032 ANA Group Identifier Maximum : 128 00:28:59.032 Number of ANA Group Identifiers : 128 00:28:59.032 Max Number of Allowed Namespaces : 1024 00:28:59.032 
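[Note] Worth decoding in the identify output around here: unlike the discovery controller above, the kernel NVM subsystem advertises ANA (Asymmetric Namespace Access), the per-group multipath state machinery, with a single group and every ANA state supported. A hedged one-liner to pull the raw ANA log (log page 0x0c) from a connected controller with stock nvme-cli, device name illustrative:

    nvme get-log /dev/nvme0 --log-id=0x0c --log-len=4096   # raw ANA log; newer nvme-cli also ships a dedicated ana-log subcommand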
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:59.032 Command Effects Log Page: Supported 00:28:59.032 Get Log Page Extended Data: Supported 00:28:59.032 Telemetry Log Pages: Not Supported 00:28:59.032 Persistent Event Log Pages: Not Supported 00:28:59.032 Supported Log Pages Log Page: May Support 00:28:59.032 Commands Supported & Effects Log Page: Not Supported 00:28:59.032 Feature Identifiers & Effects Log Page:May Support 00:28:59.032 NVMe-MI Commands & Effects Log Page: May Support 00:28:59.032 Data Area 4 for Telemetry Log: Not Supported 00:28:59.032 Error Log Page Entries Supported: 128 00:28:59.032 Keep Alive: Supported 00:28:59.032 Keep Alive Granularity: 1000 ms 00:28:59.032 00:28:59.032 NVM Command Set Attributes 00:28:59.032 ========================== 00:28:59.032 Submission Queue Entry Size 00:28:59.032 Max: 64 00:28:59.032 Min: 64 00:28:59.032 Completion Queue Entry Size 00:28:59.032 Max: 16 00:28:59.032 Min: 16 00:28:59.032 Number of Namespaces: 1024 00:28:59.033 Compare Command: Not Supported 00:28:59.033 Write Uncorrectable Command: Not Supported 00:28:59.033 Dataset Management Command: Supported 00:28:59.033 Write Zeroes Command: Supported 00:28:59.033 Set Features Save Field: Not Supported 00:28:59.033 Reservations: Not Supported 00:28:59.033 Timestamp: Not Supported 00:28:59.033 Copy: Not Supported 00:28:59.033 Volatile Write Cache: Present 00:28:59.033 Atomic Write Unit (Normal): 1 00:28:59.033 Atomic Write Unit (PFail): 1 00:28:59.033 Atomic Compare & Write Unit: 1 00:28:59.033 Fused Compare & Write: Not Supported 00:28:59.033 Scatter-Gather List 00:28:59.033 SGL Command Set: Supported 00:28:59.033 SGL Keyed: Not Supported 00:28:59.033 SGL Bit Bucket Descriptor: Not Supported 00:28:59.033 SGL Metadata Pointer: Not Supported 00:28:59.033 Oversized SGL: Not Supported 00:28:59.033 SGL Metadata Address: Not Supported 00:28:59.033 SGL Offset: Supported 00:28:59.033 Transport SGL Data Block: Not Supported 00:28:59.033 Replay Protected Memory Block: Not Supported 00:28:59.033 00:28:59.033 Firmware Slot Information 00:28:59.033 ========================= 00:28:59.033 Active slot: 0 00:28:59.033 00:28:59.033 Asymmetric Namespace Access 00:28:59.033 =========================== 00:28:59.033 Change Count : 0 00:28:59.033 Number of ANA Group Descriptors : 1 00:28:59.033 ANA Group Descriptor : 0 00:28:59.033 ANA Group ID : 1 00:28:59.033 Number of NSID Values : 1 00:28:59.033 Change Count : 0 00:28:59.033 ANA State : 1 00:28:59.033 Namespace Identifier : 1 00:28:59.033 00:28:59.033 Commands Supported and Effects 00:28:59.033 ============================== 00:28:59.033 Admin Commands 00:28:59.033 -------------- 00:28:59.033 Get Log Page (02h): Supported 00:28:59.033 Identify (06h): Supported 00:28:59.033 Abort (08h): Supported 00:28:59.033 Set Features (09h): Supported 00:28:59.033 Get Features (0Ah): Supported 00:28:59.033 Asynchronous Event Request (0Ch): Supported 00:28:59.033 Keep Alive (18h): Supported 00:28:59.033 I/O Commands 00:28:59.033 ------------ 00:28:59.033 Flush (00h): Supported 00:28:59.033 Write (01h): Supported LBA-Change 00:28:59.033 Read (02h): Supported 00:28:59.033 Write Zeroes (08h): Supported LBA-Change 00:28:59.033 Dataset Management (09h): Supported 00:28:59.033 00:28:59.033 Error Log 00:28:59.033 ========= 00:28:59.033 Entry: 0 00:28:59.033 Error Count: 0x3 00:28:59.033 Submission Queue Id: 0x0 00:28:59.033 Command Id: 0x5 00:28:59.033 Phase Bit: 0 00:28:59.033 Status Code: 0x2 00:28:59.033 Status Code Type: 0x0 00:28:59.033 Do Not Retry: 1 00:28:59.033 
Error Location: 0x28 00:28:59.033 LBA: 0x0 00:28:59.033 Namespace: 0x0 00:28:59.033 Vendor Log Page: 0x0 00:28:59.033 ----------- 00:28:59.033 Entry: 1 00:28:59.033 Error Count: 0x2 00:28:59.033 Submission Queue Id: 0x0 00:28:59.033 Command Id: 0x5 00:28:59.033 Phase Bit: 0 00:28:59.033 Status Code: 0x2 00:28:59.033 Status Code Type: 0x0 00:28:59.033 Do Not Retry: 1 00:28:59.033 Error Location: 0x28 00:28:59.033 LBA: 0x0 00:28:59.033 Namespace: 0x0 00:28:59.033 Vendor Log Page: 0x0 00:28:59.033 ----------- 00:28:59.033 Entry: 2 00:28:59.033 Error Count: 0x1 00:28:59.033 Submission Queue Id: 0x0 00:28:59.033 Command Id: 0x4 00:28:59.033 Phase Bit: 0 00:28:59.033 Status Code: 0x2 00:28:59.033 Status Code Type: 0x0 00:28:59.033 Do Not Retry: 1 00:28:59.033 Error Location: 0x28 00:28:59.033 LBA: 0x0 00:28:59.033 Namespace: 0x0 00:28:59.033 Vendor Log Page: 0x0 00:28:59.033 00:28:59.033 Number of Queues 00:28:59.033 ================ 00:28:59.033 Number of I/O Submission Queues: 128 00:28:59.033 Number of I/O Completion Queues: 128 00:28:59.033 00:28:59.033 ZNS Specific Controller Data 00:28:59.033 ============================ 00:28:59.033 Zone Append Size Limit: 0 00:28:59.033 00:28:59.033 00:28:59.033 Active Namespaces 00:28:59.033 ================= 00:28:59.033 get_feature(0x05) failed 00:28:59.033 Namespace ID:1 00:28:59.033 Command Set Identifier: NVM (00h) 00:28:59.033 Deallocate: Supported 00:28:59.033 Deallocated/Unwritten Error: Not Supported 00:28:59.033 Deallocated Read Value: Unknown 00:28:59.033 Deallocate in Write Zeroes: Not Supported 00:28:59.033 Deallocated Guard Field: 0xFFFF 00:28:59.033 Flush: Supported 00:28:59.033 Reservation: Not Supported 00:28:59.033 Namespace Sharing Capabilities: Multiple Controllers 00:28:59.033 Size (in LBAs): 3125627568 (1490GiB) 00:28:59.033 Capacity (in LBAs): 3125627568 (1490GiB) 00:28:59.033 Utilization (in LBAs): 3125627568 (1490GiB) 00:28:59.033 UUID: a83a1f68-bcbe-44dc-b874-6823ad870ecd 00:28:59.033 Thin Provisioning: Not Supported 00:28:59.033 Per-NS Atomic Units: Yes 00:28:59.033 Atomic Boundary Size (Normal): 0 00:28:59.033 Atomic Boundary Size (PFail): 0 00:28:59.033 Atomic Boundary Offset: 0 00:28:59.033 NGUID/EUI64 Never Reused: No 00:28:59.033 ANA group ID: 1 00:28:59.033 Namespace Write Protected: No 00:28:59.033 Number of LBA Formats: 1 00:28:59.033 Current LBA Format: LBA Format #00 00:28:59.033 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:59.033 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:59.033 rmmod nvme_tcp 00:28:59.033 rmmod nvme_fabrics 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:28:59.033 17:45:58 
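[Note] nvmftestfini and clean_kernel_target, running through here, unwind the setup in reverse: unload the host-side fabrics modules (the rmmod lines above), strip only the SPDK_NVMF-tagged iptables rule, flush and delete the namespace, then dismantle the configfs tree. As a standalone sketch (run as root; the redirect target of the bare `echo 0` is hidden by xtrace and assumed to be the namespace enable flag, and `ip netns delete` stands in for SPDK's _remove_spdk_ns helper):

    modprobe -r nvme-tcp nvme-fabrics                      # host side
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop exactly the tagged rule
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir "$subsys/namespaces/1" /sys/kernel/config/nvmet/ports/1 "$subsys"
    modprobe -r nvmet_tcp nvmet                            # target side, last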
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.033 17:45:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.569 17:46:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:01.569 17:46:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:29:01.569 17:46:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:01.569 17:46:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:29:01.569 17:46:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:01.569 17:46:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:01.569 17:46:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:01.569 17:46:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:01.569 17:46:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:29:01.569 17:46:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:29:01.569 17:46:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:04.103 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:04.103 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:04.103 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:04.103 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:04.103 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:04.103 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:29:04.103 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:04.103 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:04.103 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:04.103 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:04.103 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:04.103 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:04.103 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:04.362 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:04.362 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:04.362 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:05.738 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:05.738 00:29:05.738 real 0m17.299s 00:29:05.738 user 0m4.311s 00:29:05.738 sys 0m8.788s 00:29:05.738 17:46:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:05.738 17:46:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:05.738 ************************************ 00:29:05.738 END TEST nvmf_identify_kernel_target 00:29:05.738 ************************************ 00:29:05.738 17:46:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:05.738 17:46:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:05.738 17:46:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:05.738 17:46:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.738 ************************************ 00:29:05.738 START TEST nvmf_auth_host 00:29:05.738 ************************************ 00:29:05.738 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:05.998 * Looking for test storage... 
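[Note] The nvmf_auth_host preamble that follows (scripts/common.sh) is choosing lcov option names: `lt 1.15 2` splits both version strings on `.`, `-`, or `:` and compares them numerically field by field, so the legacy `--rc lcov_*` spellings get used for lcov 1.x. A standalone rendering of that comparison, assuming numeric fields:

    lt() {
        local -a ver1 ver2
        local v max
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1   # first differing field decides
            (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0   # missing fields count as 0
        done
        return 1                                                    # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov < 2: use the legacy --rc lcov_* option names'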
00:29:05.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:05.998 17:46:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:05.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.999 --rc genhtml_branch_coverage=1 00:29:05.999 --rc genhtml_function_coverage=1 00:29:05.999 --rc genhtml_legend=1 00:29:05.999 --rc geninfo_all_blocks=1 00:29:05.999 --rc geninfo_unexecuted_blocks=1 00:29:05.999 00:29:05.999 ' 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:05.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.999 --rc genhtml_branch_coverage=1 00:29:05.999 --rc genhtml_function_coverage=1 00:29:05.999 --rc genhtml_legend=1 00:29:05.999 --rc geninfo_all_blocks=1 00:29:05.999 --rc geninfo_unexecuted_blocks=1 00:29:05.999 00:29:05.999 ' 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:05.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.999 --rc genhtml_branch_coverage=1 00:29:05.999 --rc genhtml_function_coverage=1 00:29:05.999 --rc genhtml_legend=1 00:29:05.999 --rc geninfo_all_blocks=1 00:29:05.999 --rc geninfo_unexecuted_blocks=1 00:29:05.999 00:29:05.999 ' 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:05.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.999 --rc genhtml_branch_coverage=1 00:29:05.999 --rc genhtml_function_coverage=1 00:29:05.999 --rc genhtml_legend=1 00:29:05.999 --rc geninfo_all_blocks=1 00:29:05.999 --rc geninfo_unexecuted_blocks=1 00:29:05.999 00:29:05.999 ' 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.999 17:46:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:05.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:29:05.999 17:46:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:12.565 17:46:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:12.565 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:12.565 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.565 
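[editor's note] gather_supported_nvmf_pci_devs, traced above, buckets PCI functions by vendor:device ID (Intel E810 0x1592/0x159b, X722 0x37d2, assorted Mellanox IDs) and reports each match. The same scan can be done directly against sysfs; a sketch assuming the standard /sys/bus/pci layout rather than SPDK's pci_bus_cache helper:

  intel=0x8086
  for dev in /sys/bus/pci/devices/*; do
    ven=$(<"$dev/vendor") did=$(<"$dev/device")
    # Match the E810 IDs seen in this run (0x1592 / 0x159b).
    if [[ $ven == "$intel" && ( $did == 0x1592 || $did == 0x159b ) ]]; then
      echo "Found ${dev##*/} ($ven - $did)"
    fi
  done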
17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:12.565 Found net devices under 0000:86:00.0: cvl_0_0 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.565 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:12.566 Found net devices under 0000:86:00.1: cvl_0_1 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:12.566 17:46:10 
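[editor's note] Each matched PCI function is then resolved to its kernel net devices through the device's net/ subdirectory, which is exactly what the "Found net devices under 0000:86:00.x: cvl_0_x" lines record. The same lookup in isolation:

  pci=0000:86:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep interface names only
  echo "Found net devices under $pci: ${pci_net_devs[*]}"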
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:12.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:12.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:29:12.566 00:29:12.566 --- 10.0.0.2 ping statistics --- 00:29:12.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.566 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:12.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:12.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:29:12.566 00:29:12.566 --- 10.0.0.1 ping statistics --- 00:29:12.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.566 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=1238864 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 1238864 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1238864 ']' 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
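[editor's note] nvmf_tcp_init, traced from common.sh@250 onward, splits the two E810 ports across network namespaces so a single host can act as both target and initiator, opens TCP/4420, and ping-checks both directions. The sequence condensed into plain commands, with the interface, namespace, and address names of this run:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator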
00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:12.566 17:46:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=8b992ed828defba7dd520cd4807419ee 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.jja 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 8b992ed828defba7dd520cd4807419ee 0 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 8b992ed828defba7dd520cd4807419ee 0 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=8b992ed828defba7dd520cd4807419ee 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.jja 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.jja 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.jja 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:29:12.566 17:46:11 
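[editor's note] nvmfappstart launched the SPDK target inside that namespace above, and waitforlisten blocked until the RPC socket answered. A rough equivalent, assuming the default /var/tmp/spdk.sock and polling for the socket file instead of using SPDK's own retry helper:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!
  for _ in $(seq 1 100); do                 # ~10 s budget
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
  done
  kill -0 "$nvmfpid"                        # fail early if the target died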
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=c0eee7c3d655331a6cfcc2d5871c3741ecf62ac653b754ab2fc8a7b9e17cfd1f 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.AaI 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key c0eee7c3d655331a6cfcc2d5871c3741ecf62ac653b754ab2fc8a7b9e17cfd1f 3 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 c0eee7c3d655331a6cfcc2d5871c3741ecf62ac653b754ab2fc8a7b9e17cfd1f 3 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=c0eee7c3d655331a6cfcc2d5871c3741ecf62ac653b754ab2fc8a7b9e17cfd1f 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.AaI 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.AaI 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.AaI 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:29:12.566 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=9b6dd9cbdecdba03181c8d39b3e53c4fe9d6a2a52777c586 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.R84 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 9b6dd9cbdecdba03181c8d39b3e53c4fe9d6a2a52777c586 0 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 9b6dd9cbdecdba03181c8d39b3e53c4fe9d6a2a52777c586 0 
00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=9b6dd9cbdecdba03181c8d39b3e53c4fe9d6a2a52777c586 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.R84 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.R84 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.R84 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6be37dc6d536fd110ff7a2e2c6dba6f27791785abc619b5d 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.ZRg 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6be37dc6d536fd110ff7a2e2c6dba6f27791785abc619b5d 2 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6be37dc6d536fd110ff7a2e2c6dba6f27791785abc619b5d 2 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6be37dc6d536fd110ff7a2e2c6dba6f27791785abc619b5d 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.ZRg 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.ZRg 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.ZRg 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:12.567 17:46:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=c9b8c72d0636b6bba6dacd41852b0b04 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.AtE 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key c9b8c72d0636b6bba6dacd41852b0b04 1 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 c9b8c72d0636b6bba6dacd41852b0b04 1 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=c9b8c72d0636b6bba6dacd41852b0b04 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.AtE 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.AtE 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.AtE 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=c1e4d55e24c9ec96f4e8e00a47b1f8ee 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.tL3 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key c1e4d55e24c9ec96f4e8e00a47b1f8ee 1 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 c1e4d55e24c9ec96f4e8e00a47b1f8ee 1 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=c1e4d55e24c9ec96f4e8e00a47b1f8ee 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.tL3 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.tL3 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.tL3 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=0e819a10e64a04191c30c86fe35cd8b6b09d4850b201d817 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.n1d 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 0e819a10e64a04191c30c86fe35cd8b6b09d4850b201d817 2 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 0e819a10e64a04191c30c86fe35cd8b6b09d4850b201d817 2 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=0e819a10e64a04191c30c86fe35cd8b6b09d4850b201d817 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.n1d 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.n1d 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.n1d 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:29:12.567 17:46:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=be65d201c87b03a1ed01275fedadd350 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.UXu 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key be65d201c87b03a1ed01275fedadd350 0 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 be65d201c87b03a1ed01275fedadd350 0 00:29:12.567 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:29:12.568 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:29:12.568 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=be65d201c87b03a1ed01275fedadd350 00:29:12.568 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:29:12.568 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:29:12.568 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.UXu 00:29:12.568 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.UXu 00:29:12.568 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.UXu 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=fc9c12bd1450c043af0174432649031ecca7e7cf832ee601440261eb9276ec4f 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.sEW 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key fc9c12bd1450c043af0174432649031ecca7e7cf832ee601440261eb9276ec4f 3 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 fc9c12bd1450c043af0174432649031ecca7e7cf832ee601440261eb9276ec4f 3 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=fc9c12bd1450c043af0174432649031ecca7e7cf832ee601440261eb9276ec4f 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
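[editor's note] Each gen_dhchap_key call above draws N random bytes with xxd, then the inline python turns the hex string into the DHHC-1 secret written to the mktemp file. A sketch of that transformation, assuming the standard DH-HMAC-CHAP secret representation (base64 over the key bytes plus a little-endian CRC-32, hash id 0-3 in the second field, trailing colon included):

  key=$(xxd -p -c0 -l 16 /dev/urandom)       # 16 random bytes as hex
  digest=0                                   # 0=null 1=sha256 2=sha384 3=sha512
  file=$(mktemp -t spdk.key-null.XXX)
  python3 - "$key" "$digest" > "$file" <<'PY'
  import base64, binascii, struct, sys
  raw = bytes.fromhex(sys.argv[1])
  crc = struct.pack('<I', binascii.crc32(raw) & 0xffffffff)
  print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(raw + crc).decode()}:")
  PY
  chmod 0600 "$file"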
nvmf/common.sh@731 -- # python - 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.sEW 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.sEW 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.sEW 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1238864 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1238864 ']' 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jja 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.827 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.087 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.087 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.AaI ]] 00:29:13.087 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AaI 00:29:13.087 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.087 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.087 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.087 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:13.087 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.R84 00:29:13.087 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.087 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.087 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.087 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.ZRg ]] 00:29:13.087 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.ZRg 00:29:13.087 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.087 17:46:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.AtE 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.tL3 ]] 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tL3 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.n1d 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.UXu ]] 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.UXu 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.sEW 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:13.087 17:46:12 
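[editor's note] host/auth.sh then registers every generated file with the target's keyring, pairing each keyN with its ckeyN when one exists (ckeys[4] is empty, so key4 goes in alone). The loop expressed with SPDK's rpc.py, which rpc_cmd above wraps:

  for i in "${!keys[@]}"; do
    ./scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then
      ./scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
  done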
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:13.087 17:46:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:15.622 Waiting for block devices as requested 00:29:15.622 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:15.881 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:15.881 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:16.140 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:16.140 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:16.140 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:16.140 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:16.399 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:16.399 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:16.399 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:16.399 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:16.658 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:16.658 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:16.658 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:16.916 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:16.916 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:16.916 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:17.483 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:29:17.483 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:17.483 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:29:17.483 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:29:17.483 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:17.483 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:17.483 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:29:17.483 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:17.483 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:17.483 No valid GPT data, bailing 00:29:17.483 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:17.483 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:29:17.483 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:29:17.483 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:29:17.483 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:29:17.483 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:17.483 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:17.483 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:17.483 17:46:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:29:17.483 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:29:17.483 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:29:17.742 00:29:17.742 Discovery Log Number of Records 2, Generation counter 2 00:29:17.742 =====Discovery Log Entry 0====== 00:29:17.742 trtype: tcp 00:29:17.742 adrfam: ipv4 00:29:17.742 subtype: current discovery subsystem 00:29:17.742 treq: not specified, sq flow control disable supported 00:29:17.742 portid: 1 00:29:17.742 trsvcid: 4420 00:29:17.742 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:17.742 traddr: 10.0.0.1 00:29:17.742 eflags: none 00:29:17.742 sectype: none 00:29:17.742 =====Discovery Log Entry 1====== 00:29:17.742 trtype: tcp 00:29:17.742 adrfam: ipv4 00:29:17.742 subtype: nvme subsystem 00:29:17.742 treq: not specified, sq flow control disable supported 00:29:17.742 portid: 1 00:29:17.742 trsvcid: 4420 00:29:17.742 subnqn: nqn.2024-02.io.spdk:cnode0 00:29:17.742 traddr: 10.0.0.1 00:29:17.742 eflags: none 00:29:17.742 sectype: none 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
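[editor's note] configure_kernel_target builds a kernel nvmet subsystem around the free /dev/nvme0n1 and exposes it on 10.0.0.1:4420, which the nvme discover output above confirms. The bare echoes in the trace have their redirection targets stripped by xtrace; the attribute paths below are inferred from the standard nvmet configfs layout:

  cd /sys/kernel/config/nvmet
  sub=subsystems/nqn.2024-02.io.spdk:cnode0
  mkdir "$sub"
  mkdir "$sub/namespaces/1"
  mkdir ports/1
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$sub/attr_model"
  echo 1            > "$sub/attr_allow_any_host"
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > ports/1/addr_traddr
  echo tcp          > ports/1/addr_trtype
  echo 4420         > ports/1/addr_trsvcid
  echo ipv4         > ports/1/addr_adrfam
  ln -s "/sys/kernel/config/nvmet/$sub" ports/1/subsystems/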
-- host/auth.sh@49 -- # echo ffdhe2048 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: ]] 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:17.742 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:17.743 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.743 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.743 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.743 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.743 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:17.743 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:17.743 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:17.743 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.743 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.743 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:17.743 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.743 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:17.743 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:17.743 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:17.743 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
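[editor's note] nvmet_auth_set_key (host/auth.sh@42-51 above) points the kernel host entry, already linked into the subsystem's allowed_hosts, at the negotiated hash, DH group, and secrets. Again the echo targets are cut off by xtrace; the names below assume the kernel's dhchap_* host attributes, and the secrets are abbreviated here rather than copied in full:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"
  echo ffdhe2048      > "$host/dhchap_dhgroup"
  echo 'DHHC-1:00:OWI2...cw==:' > "$host/dhchap_key"       # keys[1], abbreviated
  echo 'DHHC-1:02:NmJl...Ag==:' > "$host/dhchap_ctrl_key"  # ckeys[1], abbreviated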
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:17.743 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.743 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.002 nvme0n1 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: ]] 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
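[editor's note] On the SPDK side, connect_authenticate first widens bdev_nvme's accepted digests and DH groups, then attaches to the kernel target with the keyring names registered earlier; the nvme0n1 namespace appearing above is the attach succeeding, and the loop that follows repeats the same pair of calls for every digest/dhgroup/keyid combination. The two steps for key1, as direct rpc.py calls:

  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  ./scripts/rpc.py bdev_nvme_get_controllers    # expect "nvme0"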
00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.002 17:46:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.002 nvme0n1 00:29:18.002 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.002 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.002 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.002 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.002 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.002 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.261 17:46:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: ]] 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.261 nvme0n1 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.261 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: ]] 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.520 nvme0n1 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:18.520 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: ]] 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.521 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.779 nvme0n1 00:29:18.779 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.779 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.779 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.779 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.779 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.779 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.779 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.779 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.779 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.779 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.779 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.779 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.780 17:46:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.038 nvme0n1 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.038 17:46:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: ]] 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.038 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.296 nvme0n1 00:29:19.296 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.296 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.296 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.296 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.296 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.296 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.296 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: ]] 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:19.297 
17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.297 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.556 nvme0n1 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: ]] 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.556 17:46:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.556 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.815 nvme0n1 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: ]] 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.815 17:46:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.815 17:46:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.075 nvme0n1 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:20.075 17:46:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.075 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.335 nvme0n1 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: ]] 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.335 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.635 nvme0n1 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:20.635 17:46:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: ]] 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:20.635 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.636 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.636 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.636 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.636 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:20.636 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:20.636 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:20.636 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.636 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.636 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:20.636 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.636 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:20.636 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:20.636 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:20.636 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:20.636 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.636 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.921 nvme0n1 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: ]] 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:29:20.921 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
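The nvmet_auth_set_key calls traced above (host/auth.sh@42-51) program the target side of the DH-HMAC-CHAP handshake before each connect attempt. A minimal reconstruction of that helper, assuming the stock Linux nvmet configfs layout — the actual file paths never appear in this trace, so the host directory and attribute names below are assumptions:

    # Sketch reconstructed from the xtrace above; the configfs host directory
    # and dhchap_* attribute names are assumed, not shown in this log.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]} ckey=${ckeys[$keyid]}
        local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$hostdir/dhchap_hash"      # auth.sh@48: 'hmac(sha256)'
        echo "$dhgroup"      > "$hostdir/dhchap_dhgroup"   # auth.sh@49: e.g. ffdhe4096
        echo "$key"          > "$hostdir/dhchap_key"       # auth.sh@50: host secret
        # auth.sh@51: the controller key is written only when one is defined,
        # which is what makes keyids 0-3 bidirectional and keyid 4 one-way.
        [[ -z $ckey ]] || echo "$ckey" > "$hostdir/dhchap_ctrl_key"
    }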
00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.922 17:46:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.181 nvme0n1 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: ]] 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.181 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.440 nvme0n1 00:29:21.440 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.440 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.440 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.440 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.440 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.440 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.440 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.440 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.440 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.440 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.699 17:46:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.699 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.959 nvme0n1 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: ]] 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.959 17:46:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.218 nvme0n1 00:29:22.218 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.218 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.218 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.218 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.218 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.218 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.218 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.218 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.218 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.218 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.218 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.218 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.218 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:22.218 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.218 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: ]] 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 
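Every secret echoed in this trace uses the NVMe-oF DH-HMAC-CHAP key format, DHHC-1:<id>:<base64>:, where the two-digit id records the hash used to transform the secret (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the secret followed by a CRC-32 check value. Keys of this shape can be produced with nvme-cli; a hypothetical invocation, not part of this run:

    # Generate a SHA-256-transformed DH-HMAC-CHAP secret for the test host NQN
    nvme gen-dhchap-key --hmac=1 --nqn=nqn.2024-02.io.spdk:host0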
00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.478 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.737 nvme0n1 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.737 17:46:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: ]] 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.737 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:22.738 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.738 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:22.738 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:22.738 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:22.738 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:22.738 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.738 17:46:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.306 nvme0n1 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: ]] 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.306 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.565 nvme0n1 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:23.565 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.566 17:46:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.134 nvme0n1 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: ]] 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.134 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:24.702 nvme0n1 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: ]] 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.703 17:46:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.271 nvme0n1 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:25.271 
17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: ]] 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.271 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.839 nvme0n1 00:29:25.839 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.839 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.839 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.839 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.839 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.839 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.098 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.099 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.099 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.099 17:46:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: ]] 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.099 
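xtrace prints the echo commands inside nvmet_auth_set_key (auth.sh@48-51 above) but not their redirections, so the destinations in the sketch below are assumptions based on the kernel nvmet configfs layout; the hostnqn matches the -q argument used in the attach calls. The keys and ckeys arrays hold the DHHC-1 secrets visible in the trace.

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # Assumed configfs host entry; only the echoes appear in the trace.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"
        echo "$dhgroup" > "$host/dhchap_dhgroup"
        echo "$key" > "$host/dhchap_key"
        # keyid 4 has no controller key, so that write is skipped for it.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }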
17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.099 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.667 nvme0n1 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.667 17:46:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.234 nvme0n1 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: ]] 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.234 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.494 nvme0n1 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: ]] 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.494 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.753 nvme0n1 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:27.753 17:46:26 
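The ckey=(...) line at auth.sh@58 is what makes keyid 4 special: the ${ckeys[keyid]:+...} expansion produces the controller-key arguments only when a ckey is defined, so the key4 attach earlier in this run passed --dhchap-key alone. Spelled out with the names exactly as traced:

    # Expands to two extra arguments, or to nothing at all when
    # ckeys[keyid] is empty/unset (bidirectional auth is then skipped).
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"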
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: ]] 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.753 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.754 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.013 nvme0n1 00:29:28.013 17:46:26 
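Before each attach, connect_authenticate narrows the initiator with bdev_nvme_set_options (auth.sh@60), presumably so that negotiation can only land on the single digest/dhgroup pair under test and a mismatch fails the attach rather than silently falling back. Only single values are passed here, as traced:

    # Pin DH-HMAC-CHAP negotiation to one digest and one FFDHE group.
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests sha384 \
        --dhchap-dhgroups ffdhe2048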
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: ]] 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.013 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:28.014 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:28.014 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:28.014 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.014 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.014 17:46:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:28.014 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.014 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:28.014 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:28.014 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:28.014 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:28.014 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.014 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.014 nvme0n1 00:29:28.014 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.014 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.014 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.014 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.014 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.273 nvme0n1 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.273 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: ]] 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.533 nvme0n1 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.533 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.792 
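get_main_ns_ip (nvmf/common.sh@767-781 in the trace) resolves which address the host should dial based on the transport; with tcp it dereferences NVMF_INITIATOR_IP and prints 10.0.0.1 every time in this run. A reconstruction from the traced lines, with the transport variable name and the indirect expansion filled in as assumptions:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # TEST_TRANSPORT is an assumed name; it expanded to "tcp" above.
        [[ -z $TEST_TRANSPORT ]] || [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # Indirect expansion: NVMF_INITIATOR_IP held 10.0.0.1 throughout.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }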
17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: ]] 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:28.792 17:46:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.792 nvme0n1 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.792 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.793 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.793 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.793 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.793 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.793 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: ]] 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.050 17:46:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.050 nvme0n1 00:29:29.050 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.050 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.050 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.050 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.050 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.050 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.050 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:29:29.050 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.050 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.050 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.050 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.050 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: ]] 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.308 nvme0n1 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:29.308 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.309 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:29.309 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:29.309 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:29.309 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:29.309 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:29.309 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:29.309 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:29.309 
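The DHHC-1 strings echoed above are NVMe in-band authentication secrets as used by DH-HMAC-CHAP: the two-digit field after "DHHC-1:" names the hash the secret was generated for (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), followed by the base64-encoded secret with a CRC-32 appended and a trailing colon. A minimal sketch of producing such a key with nvme-cli, assuming a recent nvme-cli build that ships the gen-dhchap-key command (flag spellings can differ by version; check nvme gen-dhchap-key --help):

    # Generate a 48-byte DH-HMAC-CHAP secret tagged for SHA-384 (the "DHHC-1:02:" variant seen above).
    # --hmac: 0=none, 1=SHA-256, 2=SHA-384, 3=SHA-512; --nqn mixes the host NQN into the derived key.
    nvme gen-dhchap-key --hmac=2 --key-length=48 --nqn=nqn.2024-02.io.spdk:host0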
17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:29.309 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:29.309 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:29.309 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.309 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:29.309 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:29.309 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:29.309 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.568 nvme0n1 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.568 
17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: ]] 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.568 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.827 nvme0n1 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.827 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.086 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.086 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.086 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.086 17:46:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: ]] 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:30.086 17:46:29 
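Stripped of the rpc_cmd wrapper and the xtrace noise, the host-side sequence exercised here is just two RPCs against the initiator's SPDK application. A rough hand-run equivalent (the scripts/rpc.py path and default RPC socket are assumptions of this sketch; key1 and ckey1 are key names registered with the application earlier in the test, not shown in this excerpt):

    # Restrict DH-HMAC-CHAP negotiation to the digest and DH group under test...
    sudo scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    # ...then connect, authenticating with key1 and requiring the controller to prove ckey1.
    sudo scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1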
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.086 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.345 nvme0n1 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: ]] 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.345 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.604 nvme0n1 00:29:30.604 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.604 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.604 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.604 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.604 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.604 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: ]] 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.605 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.864 nvme0n1 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:30.864 17:46:29 
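The keyid 4 pass that begins here is the one without a controller secret (its ckey is echoed empty), and the helpers cope with that via bash's :+ alternate-value expansion, visible below as ckey=(${ckeys[keyid]:+...}): the array stays empty when no ckey exists, so no --dhchap-ctrlr-key flag is emitted. A self-contained illustration of the idiom (the array contents here are made up for the demo):

    #!/usr/bin/env bash
    ckeys=([1]="secret1" [4]="")          # keyid 4 deliberately has no controller key
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-(no ctrlr key flag)}"
    done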
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.864 17:46:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.123 nvme0n1 00:29:31.123 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.123 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.123 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.123 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.123 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.123 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.123 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.123 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.123 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.123 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: ]] 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.382 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.641 nvme0n1 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: ]] 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.641 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:31.642 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.642 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:31.642 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:31.642 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:31.642 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:31.642 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.642 17:46:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.209 nvme0n1 00:29:32.209 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.209 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.209 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.209 17:46:31 
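Each attach above is followed by the same readback check before the next key is tried: list the controllers, expect exactly the name nvme0, then detach so the next round starts clean. Done by hand it would look like this (a sketch assuming a running SPDK target and scripts/rpc.py; jq is used exactly as the test uses it):

    # Verify the authenticated connection actually produced controller "nvme0"...
    name=$(sudo scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || { echo "unexpected controller: $name"; exit 1; }
    # ...and tear it down before the next digest/dhgroup/key combination.
    sudo scripts/rpc.py bdev_nvme_detach_controller nvme0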
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.209 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.209 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.209 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.209 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.209 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.209 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.209 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.209 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.209 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: ]] 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.210 17:46:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.210 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.468 nvme0n1 00:29:32.468 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.468 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.468 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.468 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.468 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.468 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: ]] 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:32.727 17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.727 
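The get_main_ns_ip trace repeated throughout this run shows why the log prints variable names before values: the helper maps each transport to the name of the environment variable holding the address (NVMF_INITIATOR_IP for tcp), then dereferences it with bash indirect expansion, which is why ip=NVMF_INITIATOR_IP is followed by echo 10.0.0.1. A simplified reconstruction (the real helper lives in nvmf/common.sh; this body is an assumption based on the trace, not a copy):

    #!/usr/bin/env bash
    TEST_TRANSPORT=tcp
    NVMF_INITIATOR_IP=10.0.0.1
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        ip=${ip_candidates[$TEST_TRANSPORT]}  # picks the variable name for this transport
        [[ -n ${!ip} ]] && echo "${!ip}"      # indirect expansion: prints 10.0.0.1 here
    }
    get_main_ns_ip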
17:46:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.986 nvme0n1 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.986 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.555 nvme0n1 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.555 17:46:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: ]] 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.555 17:46:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.123 nvme0n1 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: ]] 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.123 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.124 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:34.124 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.124 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:34.124 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:34.124 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:34.124 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:34.124 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.124 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.691 nvme0n1 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: ]] 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:34.691 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:34.949 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:34.949 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.949 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:34.949 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.949 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.949 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.949 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.949 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:34.949 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:34.949 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:34.949 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.949 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.949 
17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:34.949 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.949 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:34.949 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:34.949 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:34.949 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:34.949 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.949 17:46:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.517 nvme0n1 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: ]] 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.517 17:46:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.084 nvme0n1 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.085 17:46:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:36.085 17:46:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.085 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.652 nvme0n1 00:29:36.652 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.652 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.652 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.652 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.652 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.652 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.652 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: ]] 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.653 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:36.912 nvme0n1 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: ]] 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.912 17:46:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.171 nvme0n1 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:37.171 
17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: ]] 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.171 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.430 nvme0n1 00:29:37.430 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.430 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.430 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.430 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.430 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.430 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.430 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.430 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.430 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: ]] 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.431 
17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.431 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.690 nvme0n1 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.690 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:37.691 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.691 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:37.691 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:37.691 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:37.691 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:37.691 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.691 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.691 nvme0n1 00:29:37.691 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.691 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.691 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.691 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.691 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.691 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: ]] 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.949 17:46:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.949 nvme0n1 00:29:37.949 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.949 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.949 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.949 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.949 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.949 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.208 
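[Annotation] The repeated nvmf/common.sh@767-781 trace above is the get_main_ns_ip helper resolving which address the initiator should dial: for tcp it picks NVMF_INITIATOR_IP (10.0.0.1 in this run), for rdma it would pick NVMF_FIRST_TARGET_IP. A reconstruction from the xtrace (not the verbatim source, which may differ in details):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP   # nvmf/common.sh@770
            ["tcp"]=NVMF_INITIATOR_IP       # nvmf/common.sh@771
        )
        # nvmf/common.sh@773: bail out if the transport or its candidate is unset
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # @774: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # @776: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                          # @781: echo 10.0.0.1
    }
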
17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: ]] 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:38.208 17:46:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.208 nvme0n1 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.208 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:38.466 17:46:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: ]] 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.466 nvme0n1 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.466 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: ]] 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.725 17:46:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:38.725 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:38.726 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.726 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.726 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:38.726 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.726 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:38.726 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:38.726 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:38.726 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:38.726 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.726 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.726 nvme0n1 00:29:38.726 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.726 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.726 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.726 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.726 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.726 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:38.986 
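[Annotation] host/auth.sh@42-51 is the target-side half of each iteration: nvmet_auth_set_key installs the HMAC digest, DH group, and DHHC-1 secret(s) for the host NQN before the connect attempt. The destinations of the echoed values are not visible here (xtrace does not show redirections), so the configfs paths in this sketch are an assumption based on the kernel nvmet host attribute layout, not taken from the log:

    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest="$1" dhgroup="$2" keyid="$3"
        key="${keys[keyid]}" ckey="${ckeys[keyid]:-}"   # DHHC-1 secrets echoed at @45/@46
        # Assumed path; the real helper resolves the host NQN directory itself.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"      # @48: echo 'hmac(sha512)'
        echo "$dhgroup" > "$host/dhchap_dhgroup"        # @49: echo ffdhe3072
        echo "$key" > "$host/dhchap_key"                # @50: echo DHHC-1:...
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # @51
    }
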
17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:38.986 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.987 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:38.987 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:38.987 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:38.987 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:38.987 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.987 17:46:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:29:38.987 nvme0n1 00:29:38.987 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.987 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.987 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.987 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.987 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.987 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.987 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.987 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.987 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.987 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: ]] 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:39.259 17:46:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.259 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.518 nvme0n1 00:29:39.518 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.518 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.518 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.518 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.518 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.518 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.518 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.518 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.518 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.518 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.518 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.518 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.518 17:46:38 
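[Annotation] Each connect_authenticate pass then drives the same initiator-side RPC sequence; stripped of the harness, one iteration (sha512/ffdhe4096/keyid 1, the case that follows) reduces to the commands below. rpc_cmd is the harness wrapper around scripts/rpc.py, and key1/ckey1 are keyring entries registered earlier in auth.sh (outside this excerpt):

    # host/auth.sh@60: restrict the host to one digest and one DH group
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    # host/auth.sh@61: connect with DH-CHAP (bidirectional when a ctrlr key exists)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # host/auth.sh@64: authentication succeeded iff the controller appears
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    # host/auth.sh@65: tear down before the next dhgroup/keyid combination
    rpc_cmd bdev_nvme_detach_controller nvme0
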
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:39.518 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: ]] 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.519 17:46:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.519 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.778 nvme0n1 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: ]] 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:39.778 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.779 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:39.779 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.779 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.779 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.779 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:39.779 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:39.779 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:39.779 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:39.779 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.779 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.779 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:39.779 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.779 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:39.779 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:39.779 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:39.779 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:39.779 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.779 17:46:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.038 nvme0n1 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: ]] 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.038 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.297 nvme0n1 00:29:40.297 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.297 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.297 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.297 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.297 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.297 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.297 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.297 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.297 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.297 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.555 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.555 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.555 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.556 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.815 nvme0n1 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: ]] 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.815 17:46:39 
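[Annotation] The host/auth.sh@101-104 headers show the driver: every DH group is crossed with every keyid, and the @58 expansion ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) drops the controller-key argument whenever ckeys[keyid] is empty (keyid 4 above), so those attaches run unidirectional auth. Schematically (the enclosing sha512 digest loop is assumed from context, not shown in this excerpt):

    for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ...
        for keyid in "${!keys[@]}"; do       # 0 1 2 3 4
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # target side (@103)
            connect_authenticate sha512 "$dhgroup" "$keyid"  # initiator side (@104)
        done
    done
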
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.815 17:46:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.073 nvme0n1 00:29:41.073 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.073 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.073 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.073 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.073 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.073 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.073 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.073 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.073 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.073 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: ]] 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:41.332 17:46:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.332 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.591 nvme0n1 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: ]] 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.591 17:46:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.159 nvme0n1 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: ]] 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.159 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.418 nvme0n1 00:29:42.418 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.418 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.418 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.418 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.418 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.418 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.677 17:46:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.677 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.936 nvme0n1 00:29:42.936 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.936 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.936 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.936 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.936 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.936 17:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.936 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.936 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.936 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.936 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.936 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.936 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:42.936 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGI5OTJlZDgyOGRlZmJhN2RkNTIwY2Q0ODA3NDE5ZWWFU6i6: 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: ]] 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBlZWU3YzNkNjU1MzMxYTZjZmNjMmQ1ODcxYzM3NDFlY2Y2MmFjNjUzYjc1NGFiMmZjOGE3YjllMTdjZmQxZozHTss=: 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.937 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.504 nvme0n1 00:29:43.504 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.504 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.504 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.504 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.504 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.504 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: ]] 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:43.763 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:43.764 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.764 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.764 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:43.764 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.764 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:43.764 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:43.764 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:43.764 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:43.764 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.764 17:46:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.332 nvme0n1 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.332 17:46:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: ]] 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.332 17:46:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.332 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.900 nvme0n1 00:29:44.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:44.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:44.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:44.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:44.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:44.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:44.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:44.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:44.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:44.900 17:46:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGU4MTlhMTBlNjRhMDQxOTFjMzBjODZmZTM1Y2Q4YjZiMDlkNDg1MGIyMDFkODE3juJVXA==: 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: ]] 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU2NWQyMDFjODdiMDNhMWVkMDEyNzVmZWRhZGQzNTADPI9D: 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:44.900 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.900 
17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.468 nvme0n1 00:29:45.468 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.468 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.468 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.468 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.468 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.468 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM5YzEyYmQxNDUwYzA0M2FmMDE3NDQzMjY0OTAzMWVjY2E3ZTdjZjgzMmVlNjAxNDQwMjYxZWI5Mjc2ZWM0Zmiiy/A=: 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.727 17:46:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.295 nvme0n1 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: ]] 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.295 request: 00:29:46.295 { 00:29:46.295 "name": "nvme0", 00:29:46.295 "trtype": "tcp", 00:29:46.295 "traddr": "10.0.0.1", 00:29:46.295 "adrfam": "ipv4", 00:29:46.295 "trsvcid": "4420", 00:29:46.295 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:46.295 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:46.295 "prchk_reftag": false, 00:29:46.295 "prchk_guard": false, 00:29:46.295 "hdgst": false, 00:29:46.295 "ddgst": false, 00:29:46.295 "allow_unrecognized_csi": false, 00:29:46.295 "method": "bdev_nvme_attach_controller", 00:29:46.295 "req_id": 1 00:29:46.295 } 00:29:46.295 Got JSON-RPC error response 00:29:46.295 response: 00:29:46.295 { 00:29:46.295 "code": -5, 00:29:46.295 "message": "Input/output error" 00:29:46.295 } 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:46.295 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
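The loop traced above runs one connect_authenticate cycle per (digest, dhgroup, keyid) combination: bdev_nvme_set_options pins the host to a single DH-HMAC-CHAP digest and FFDHE group, bdev_nvme_attach_controller presents --dhchap-key keyN (plus --dhchap-ctrlr-key ckeyN whenever a controller secret exists for that keyid; keyid 4 has none), bdev_nvme_get_controllers piped through jq confirms the controller came up as nvme0, and bdev_nvme_detach_controller tears it down again. The request/response pair just logged is the first negative probe: attaching with no DHCHAP key at all against the now-keyed target fails with JSON-RPC error -5 (Input/output error). A minimal standalone sketch of one positive cycle plus that probe follows; the scripts/rpc.py path, the running target at 10.0.0.1:4420, and the pre-registered key names key0/ckey0 are assumptions here, since that setup happens before this excerpt.

#!/usr/bin/env bash
# Sketch, not the test script itself: replays one connect_authenticate cycle
# against an already-configured target (assumed at 10.0.0.1:4420, with DHHC-1
# secrets registered on both sides under the names key0/ckey0).
set -e
rpc=./scripts/rpc.py   # assumed location of SPDK's RPC client

# Pin the initiator to one digest/dhgroup pair, as bdev_nvme_set_options does
# above for sha512 with ffdhe4096, ffdhe6144 and ffdhe8192 in turn.
"$rpc" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# Authenticated attach for keyid 0, bidirectional (host key + controller key).
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the same way the script does: the controller list must name nvme0.
[[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
"$rpc" bdev_nvme_detach_controller nvme0

# Negative probe: without --dhchap-key the attach must be rejected; rpc.py
# surfaces the JSON-RPC error {"code": -5, "message": "Input/output error"}.
if "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
    echo "unexpected: unauthenticated attach succeeded" >&2
    exit 1
fi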
00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.296 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.555 request: 00:29:46.555 { 00:29:46.555 "name": "nvme0", 00:29:46.555 "trtype": "tcp", 00:29:46.555 "traddr": "10.0.0.1", 00:29:46.555 "adrfam": "ipv4", 00:29:46.555 "trsvcid": "4420", 00:29:46.555 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:46.555 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:46.555 "prchk_reftag": false, 00:29:46.555 "prchk_guard": false, 00:29:46.555 "hdgst": false, 00:29:46.555 "ddgst": false, 00:29:46.555 "dhchap_key": "key2", 00:29:46.555 "allow_unrecognized_csi": false, 00:29:46.555 "method": "bdev_nvme_attach_controller", 00:29:46.555 "req_id": 1 00:29:46.555 } 00:29:46.555 Got JSON-RPC error response 00:29:46.555 response: 00:29:46.555 { 00:29:46.555 "code": -5, 00:29:46.555 "message": "Input/output error" 00:29:46.555 } 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
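A key that exists but does not match what the target holds (key2 above) fails identically to no key at all, which is the point of the check: the initiator cannot tell a wrong secret from a missing one. The records that follow then walk the positive path (attach with key1/ckey1, with one-second loss/reconnect timers so re-authentication is exercised quickly) and rotate the secrets with bdev_nvme_set_keys, asserting that a proposal the target does not hold is rejected with -13 (Permission denied) without tearing down the controller. A hedged sketch of that sequence; key1/ckey2 etc. are keyring names the harness registered earlier from its /tmp/spdk.key-* files, so the registration step shown in the comment is illustrative, not the harness's literal call.

    # Assumed earlier, names/paths illustrative:
    #   rpc.py keyring_file_add_key key1 /tmp/spdk.key-sha256.XXX
    "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
        --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1

    # After the target is re-keyed to keyid 2, rotation should succeed:
    "$RPC" bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # ...whereas a pair the target does not hold must fail with -13:
    if "$RPC" bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
        echo "unexpected: mismatched key proposal was accepted" >&2
        exit 1
    fi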
00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.555 request: 00:29:46.555 { 00:29:46.555 "name": "nvme0", 00:29:46.555 "trtype": "tcp", 00:29:46.555 "traddr": "10.0.0.1", 00:29:46.555 "adrfam": "ipv4", 00:29:46.555 "trsvcid": "4420", 00:29:46.555 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:46.555 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:46.555 "prchk_reftag": false, 00:29:46.555 "prchk_guard": false, 00:29:46.555 "hdgst": false, 00:29:46.555 "ddgst": false, 00:29:46.555 "dhchap_key": "key1", 00:29:46.555 "dhchap_ctrlr_key": "ckey2", 00:29:46.555 "allow_unrecognized_csi": false, 00:29:46.555 "method": "bdev_nvme_attach_controller", 00:29:46.555 "req_id": 1 00:29:46.555 } 00:29:46.555 Got JSON-RPC error response 00:29:46.555 response: 00:29:46.555 { 00:29:46.555 "code": -5, 00:29:46.555 "message": "Input/output 
error" 00:29:46.555 } 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.555 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.815 nvme0n1 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: ]] 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.815 request: 00:29:46.815 { 00:29:46.815 "name": "nvme0", 00:29:46.815 "dhchap_key": "key1", 00:29:46.815 "dhchap_ctrlr_key": "ckey2", 00:29:46.815 "method": "bdev_nvme_set_keys", 00:29:46.815 "req_id": 1 00:29:46.815 } 00:29:46.815 Got JSON-RPC error response 00:29:46.815 response: 00:29:46.815 { 00:29:46.815 "code": -13, 00:29:46.815 "message": "Permission denied" 00:29:46.815 } 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:46.815 17:46:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:48.192 17:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.192 17:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:48.192 17:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.192 17:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.192 17:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.192 17:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:48.192 17:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI2ZGQ5Y2JkZWNkYmEwMzE4MWM4ZDM5YjNlNTNjNGZlOWQ2YTJhNTI3NzdjNTg2rfL0cw==: 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: ]] 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NmJlMzdkYzZkNTM2ZmQxMTBmZjdhMmUyYzZkYmE2ZjI3NzkxNzg1YWJjNjE5YjVkuBs3Ag==: 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.146 nvme0n1 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzliOGM3MmQwNjM2YjZiYmE2ZGFjZDQxODUyYjBiMDQ2R1wX: 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: ]] 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFlNGQ1NWUyNGM5ZWM5NmY0ZThlMDBhNDdiMWY4ZWW5VaW/: 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.146 request: 00:29:49.146 { 00:29:49.146 "name": "nvme0", 00:29:49.146 "dhchap_key": "key2", 00:29:49.146 "dhchap_ctrlr_key": "ckey1", 00:29:49.146 "method": "bdev_nvme_set_keys", 00:29:49.146 "req_id": 1 00:29:49.146 } 00:29:49.146 Got JSON-RPC error response 00:29:49.146 response: 00:29:49.146 { 00:29:49.146 "code": -13, 00:29:49.146 "message": "Permission denied" 00:29:49.146 } 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.146 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.404 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.404 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:29:49.404 17:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:29:50.341 17:46:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:50.341 rmmod nvme_tcp 00:29:50.341 rmmod nvme_fabrics 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 1238864 ']' 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 1238864 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1238864 ']' 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1238864 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1238864 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1238864' 00:29:50.341 killing process with pid 1238864 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1238864 00:29:50.341 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1238864 00:29:50.600 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:50.600 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:50.600 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:50.600 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:29:50.600 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:29:50.600 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:50.600 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:29:50.600 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:50.600 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:50.600 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.600 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:29:50.600 17:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.136 17:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:53.136 17:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:53.136 17:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:53.136 17:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:53.136 17:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:53.136 17:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:29:53.136 17:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:53.136 17:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:53.136 17:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:53.136 17:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:53.136 17:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:29:53.136 17:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:29:53.136 17:46:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:55.672 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:55.672 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:55.672 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:55.672 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:55.672 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:55.672 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:55.672 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:55.672 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:55.672 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:55.672 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:55.672 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:55.672 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:55.672 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:55.672 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:55.672 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:55.672 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:57.051 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:57.310 17:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.jja /tmp/spdk.key-null.R84 /tmp/spdk.key-sha256.AtE /tmp/spdk.key-sha384.n1d /tmp/spdk.key-sha512.sEW /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:57.310 17:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:59.846 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:29:59.846 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:59.846 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:29:59.847 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:29:59.847 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:29:59.847 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:29:59.847 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:29:59.847 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:29:59.847 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:29:59.847 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:29:59.847 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:29:59.847 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:29:59.847 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:29:59.847 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:29:59.847 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:29:59.847 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:29:59.847 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:30:00.106 00:30:00.106 real 0m54.277s 00:30:00.106 user 0m48.335s 00:30:00.106 sys 0m12.663s 00:30:00.106 17:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:00.106 17:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.106 ************************************ 00:30:00.106 END TEST nvmf_auth_host 00:30:00.106 ************************************ 00:30:00.106 17:46:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:30:00.106 17:46:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:00.106 17:46:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:00.106 17:46:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:00.106 17:46:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.106 ************************************ 00:30:00.106 START TEST nvmf_digest 00:30:00.106 ************************************ 00:30:00.106 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:00.366 * Looking for test storage... 
00:30:00.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:00.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.366 --rc genhtml_branch_coverage=1 00:30:00.366 --rc genhtml_function_coverage=1 00:30:00.366 --rc genhtml_legend=1 00:30:00.366 --rc geninfo_all_blocks=1 00:30:00.366 --rc geninfo_unexecuted_blocks=1 00:30:00.366 00:30:00.366 ' 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:00.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.366 --rc genhtml_branch_coverage=1 00:30:00.366 --rc genhtml_function_coverage=1 00:30:00.366 --rc genhtml_legend=1 00:30:00.366 --rc geninfo_all_blocks=1 00:30:00.366 --rc geninfo_unexecuted_blocks=1 00:30:00.366 00:30:00.366 ' 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:00.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.366 --rc genhtml_branch_coverage=1 00:30:00.366 --rc genhtml_function_coverage=1 00:30:00.366 --rc genhtml_legend=1 00:30:00.366 --rc geninfo_all_blocks=1 00:30:00.366 --rc geninfo_unexecuted_blocks=1 00:30:00.366 00:30:00.366 ' 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:00.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.366 --rc genhtml_branch_coverage=1 00:30:00.366 --rc genhtml_function_coverage=1 00:30:00.366 --rc genhtml_legend=1 00:30:00.366 --rc geninfo_all_blocks=1 00:30:00.366 --rc geninfo_unexecuted_blocks=1 00:30:00.366 00:30:00.366 ' 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.366 
17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:00.366 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:00.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:00.367 17:46:59 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:30:00.367 17:46:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:06.937 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:06.937 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:30:06.937 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:06.937 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:06.937 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:06.937 17:47:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:06.937 
17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:06.937 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:06.937 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.937 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:06.937 Found net devices under 0000:86:00.0: cvl_0_0 
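One incidental finding a few records back: while digest.sh sources nvmf/common.sh, bash prints "[: : integer expression expected" from line 33 of that file. The xtrace shows the culprit, a numeric test against an empty expansion ('[' '' -eq 1 ']'); test(1) cannot parse the empty string as an integer, prints the warning, and returns status 2, so the branch simply falls through as false and the run continues. The noise is avoidable with a defaulted expansion, sketched below (the variable name is illustrative, not the one actually tested in common.sh):

    # Guard a numeric test against unset/empty variables.
    if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi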
00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:06.938 Found net devices under 0000:86:00.1: cvl_0_1 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:06.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:06.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:30:06.938 00:30:06.938 --- 10.0.0.2 ping statistics --- 00:30:06.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.938 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:06.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:06.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:30:06.938 00:30:06.938 --- 10.0.0.1 ping statistics --- 00:30:06.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.938 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:06.938 ************************************ 00:30:06.938 START TEST nvmf_digest_clean 00:30:06.938 ************************************ 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=1252625 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 1252625 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1252625 ']' 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:06.938 [2024-10-14 17:47:05.407885] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:30:06.938 [2024-10-14 17:47:05.407932] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.938 [2024-10-14 17:47:05.480753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.938 [2024-10-14 17:47:05.521894] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.938 [2024-10-14 17:47:05.521928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.938 [2024-10-14 17:47:05.521935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.938 [2024-10-14 17:47:05.521942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.938 [2024-10-14 17:47:05.521947] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
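The target is started inside the cvl_0_0_ns_spdk namespace so that only the interface moved there (cvl_0_0, 10.0.0.2) serves NVMe/TCP. Condensed from the commands traced above; the polling loop is a simplified stand-in for the suite's waitforlisten helper, not its exact logic:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # simplified waitforlisten: poll the default RPC socket until it answers
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done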
00:30:06.938 [2024-10-14 17:47:05.522490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:06.938 null0 00:30:06.938 [2024-10-14 17:47:05.676713] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.938 [2024-10-14 17:47:05.700912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:06.938 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1252648 00:30:06.939 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1252648 /var/tmp/bperf.sock 00:30:06.939 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:06.939 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1252648 ']' 00:30:06.939 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:06.939 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:30:06.939 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:06.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:06.939 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:06.939 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:06.939 [2024-10-14 17:47:05.756206] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:30:06.939 [2024-10-14 17:47:05.756247] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1252648 ] 00:30:06.939 [2024-10-14 17:47:05.824400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.939 [2024-10-14 17:47:05.866420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.939 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:06.939 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:30:06.939 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:06.939 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:06.939 17:47:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:07.198 17:47:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:07.198 17:47:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:07.457 nvme0n1 00:30:07.457 17:47:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:07.457 17:47:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:07.457 Running I/O for 2 seconds... 
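Every run_bperf iteration drives the same three RPCs against the bdevperf socket; --ddgst is the switch that makes each TCP data PDU carry a CRC32C data digest, which is the work this suite measures. As issued in this run:

    rpc=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock)
    "${rpc[@]}" framework_start_init          # release bdevperf's --wait-for-rpc hold
    "${rpc[@]}" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0    # exposes bdev nvme0n1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests  # runs the -w/-o/-q job for -t seconds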
00:30:09.461 24522.00 IOPS, 95.79 MiB/s [2024-10-14T15:47:08.599Z] 24980.50 IOPS, 97.58 MiB/s 00:30:09.461 Latency(us) 00:30:09.461 [2024-10-14T15:47:08.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:09.461 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:09.461 nvme0n1 : 2.00 24992.65 97.63 0.00 0.00 5116.54 2496.61 14854.83 00:30:09.461 [2024-10-14T15:47:08.599Z] =================================================================================================================== 00:30:09.461 [2024-10-14T15:47:08.599Z] Total : 24992.65 97.63 0.00 0.00 5116.54 2496.61 14854.83 00:30:09.461 { 00:30:09.461 "results": [ 00:30:09.461 { 00:30:09.461 "job": "nvme0n1", 00:30:09.461 "core_mask": "0x2", 00:30:09.461 "workload": "randread", 00:30:09.461 "status": "finished", 00:30:09.461 "queue_depth": 128, 00:30:09.461 "io_size": 4096, 00:30:09.461 "runtime": 2.004149, 00:30:09.461 "iops": 24992.652741886955, 00:30:09.461 "mibps": 97.62754977299592, 00:30:09.461 "io_failed": 0, 00:30:09.461 "io_timeout": 0, 00:30:09.461 "avg_latency_us": 5116.538799413234, 00:30:09.461 "min_latency_us": 2496.609523809524, 00:30:09.461 "max_latency_us": 14854.826666666666 00:30:09.461 } 00:30:09.461 ], 00:30:09.461 "core_count": 1 00:30:09.461 } 00:30:09.461 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:09.461 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:09.461 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:09.461 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:09.461 | select(.opcode=="crc32c") 00:30:09.461 | "\(.module_name) \(.executed)"' 00:30:09.461 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:09.721 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:09.721 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:09.721 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:09.721 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:09.721 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1252648 00:30:09.721 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1252648 ']' 00:30:09.721 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1252648 00:30:09.721 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:30:09.721 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:09.721 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1252648 00:30:09.721 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:09.721 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:30:09.721 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1252648' 00:30:09.721 killing process with pid 1252648 00:30:09.721 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1252648 00:30:09.721 Received shutdown signal, test time was about 2.000000 seconds 00:30:09.721 00:30:09.721 Latency(us) 00:30:09.721 [2024-10-14T15:47:08.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:09.721 [2024-10-14T15:47:08.859Z] =================================================================================================================== 00:30:09.721 [2024-10-14T15:47:08.859Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:09.721 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1252648 00:30:09.981 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:30:09.981 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:09.981 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:09.981 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:09.981 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:09.981 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:09.981 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:09.981 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1253130 00:30:09.981 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1253130 /var/tmp/bperf.sock 00:30:09.981 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:09.981 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1253130 ']' 00:30:09.981 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:09.981 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:09.981 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:09.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:09.981 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:09.981 17:47:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:09.981 [2024-10-14 17:47:08.983117] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:30:09.981 [2024-10-14 17:47:08.983166] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1253130 ] 00:30:09.981 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:09.981 Zero copy mechanism will not be used. 00:30:09.981 [2024-10-14 17:47:09.051800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.981 [2024-10-14 17:47:09.088913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.241 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:10.241 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:30:10.241 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:10.241 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:10.241 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:10.499 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:10.499 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:10.758 nvme0n1 00:30:10.758 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:10.758 17:47:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:10.758 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:10.758 Zero copy mechanism will not be used. 00:30:10.758 Running I/O for 2 seconds... 
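The "I/O size of 131072 is greater than zero copy threshold (65536)" notice is the posix sock module reporting that 128 KiB payloads exceed its default zero-copy cutoff, so sends in this run are plain copies. A hypothetical tuning sketch, assuming the sock_impl_set_options RPC and its --zerocopy-threshold flag are available in this tree; it is not part of this run:

    # Hypothetical: raise the posix zero-copy threshold so 128 KiB sends stay
    # eligible for MSG_ZEROCOPY. Must run while the app is still held by
    # --wait-for-rpc, before any TCP connection exists.
    "${rpc[@]}" sock_impl_set_options -i posix --zerocopy-threshold 131072
    "${rpc[@]}" framework_start_init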
00:30:12.629 5952.00 IOPS, 744.00 MiB/s [2024-10-14T15:47:11.767Z] 5961.00 IOPS, 745.12 MiB/s 00:30:12.629 Latency(us) 00:30:12.629 [2024-10-14T15:47:11.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.629 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:12.629 nvme0n1 : 2.00 5960.77 745.10 0.00 0.00 2681.52 951.83 4306.65 00:30:12.629 [2024-10-14T15:47:11.767Z] =================================================================================================================== 00:30:12.629 [2024-10-14T15:47:11.767Z] Total : 5960.77 745.10 0.00 0.00 2681.52 951.83 4306.65 00:30:12.629 { 00:30:12.629 "results": [ 00:30:12.629 { 00:30:12.629 "job": "nvme0n1", 00:30:12.629 "core_mask": "0x2", 00:30:12.629 "workload": "randread", 00:30:12.629 "status": "finished", 00:30:12.629 "queue_depth": 16, 00:30:12.629 "io_size": 131072, 00:30:12.629 "runtime": 2.003098, 00:30:12.629 "iops": 5960.766772269754, 00:30:12.629 "mibps": 745.0958465337193, 00:30:12.629 "io_failed": 0, 00:30:12.629 "io_timeout": 0, 00:30:12.629 "avg_latency_us": 2681.5156468054556, 00:30:12.629 "min_latency_us": 951.832380952381, 00:30:12.630 "max_latency_us": 4306.651428571428 00:30:12.630 } 00:30:12.630 ], 00:30:12.630 "core_count": 1 00:30:12.630 } 00:30:12.888 17:47:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:12.888 17:47:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:12.888 17:47:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:12.888 17:47:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:12.888 | select(.opcode=="crc32c") 00:30:12.888 | "\(.module_name) \(.executed)"' 00:30:12.888 17:47:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:12.888 17:47:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:12.888 17:47:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:12.888 17:47:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:12.888 17:47:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:12.888 17:47:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1253130 00:30:12.888 17:47:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1253130 ']' 00:30:12.888 17:47:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1253130 00:30:12.888 17:47:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:30:12.888 17:47:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:12.888 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1253130 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1253130' 00:30:13.148 killing process with pid 1253130 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1253130 00:30:13.148 Received shutdown signal, test time was about 2.000000 seconds 00:30:13.148 00:30:13.148 Latency(us) 00:30:13.148 [2024-10-14T15:47:12.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.148 [2024-10-14T15:47:12.286Z] =================================================================================================================== 00:30:13.148 [2024-10-14T15:47:12.286Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1253130 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1253717 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1253717 /var/tmp/bperf.sock 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1253717 ']' 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:13.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:13.148 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:13.148 [2024-10-14 17:47:12.249953] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:30:13.148 [2024-10-14 17:47:12.250000] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1253717 ] 00:30:13.408 [2024-10-14 17:47:12.317339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.408 [2024-10-14 17:47:12.359395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.408 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:13.408 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:30:13.408 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:13.408 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:13.408 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:13.666 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:13.666 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:13.925 nvme0n1 00:30:13.925 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:13.925 17:47:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:14.184 Running I/O for 2 seconds... 
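After each run, as already traced following the first two, get_accel_stats confirms that digest work really executed and was handled by the expected accel module ("software" here, since scan_dsa=false). The check reduces to (rpc array as in the earlier sketch):

    # Post-run check from digest.sh: crc32c work must have executed,
    # and in the expected module ("software" when DSA is off).
    read -r acc_module acc_executed < <("${rpc[@]}" accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    [[ $acc_module == software ]] && (( acc_executed > 0 ))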
00:30:16.055 27470.00 IOPS, 107.30 MiB/s [2024-10-14T15:47:15.193Z] 27547.00 IOPS, 107.61 MiB/s 00:30:16.055 Latency(us) 00:30:16.055 [2024-10-14T15:47:15.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:16.055 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:16.055 nvme0n1 : 2.01 27551.10 107.62 0.00 0.00 4636.96 3261.20 9549.53 00:30:16.055 [2024-10-14T15:47:15.193Z] =================================================================================================================== 00:30:16.055 [2024-10-14T15:47:15.193Z] Total : 27551.10 107.62 0.00 0.00 4636.96 3261.20 9549.53 00:30:16.055 { 00:30:16.055 "results": [ 00:30:16.055 { 00:30:16.055 "job": "nvme0n1", 00:30:16.055 "core_mask": "0x2", 00:30:16.055 "workload": "randwrite", 00:30:16.055 "status": "finished", 00:30:16.055 "queue_depth": 128, 00:30:16.055 "io_size": 4096, 00:30:16.055 "runtime": 2.0058, 00:30:16.055 "iops": 27551.10180476618, 00:30:16.055 "mibps": 107.62149142486788, 00:30:16.055 "io_failed": 0, 00:30:16.055 "io_timeout": 0, 00:30:16.055 "avg_latency_us": 4636.9554052987405, 00:30:16.055 "min_latency_us": 3261.1961904761906, 00:30:16.055 "max_latency_us": 9549.531428571428 00:30:16.055 } 00:30:16.055 ], 00:30:16.055 "core_count": 1 00:30:16.055 } 00:30:16.055 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:16.055 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:16.055 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:16.055 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:16.055 | select(.opcode=="crc32c") 00:30:16.055 | "\(.module_name) \(.executed)"' 00:30:16.055 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:16.314 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:16.314 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:16.314 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:16.314 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:16.314 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1253717 00:30:16.314 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1253717 ']' 00:30:16.314 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1253717 00:30:16.314 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:30:16.314 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:16.314 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1253717 00:30:16.314 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:16.314 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:30:16.314 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1253717' 00:30:16.314 killing process with pid 1253717 00:30:16.314 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1253717 00:30:16.314 Received shutdown signal, test time was about 2.000000 seconds 00:30:16.314 00:30:16.314 Latency(us) 00:30:16.314 [2024-10-14T15:47:15.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:16.314 [2024-10-14T15:47:15.452Z] =================================================================================================================== 00:30:16.314 [2024-10-14T15:47:15.452Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:16.314 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1253717 00:30:16.572 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:16.572 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:16.572 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:16.572 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:16.572 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:16.572 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:16.572 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:16.572 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1254287 00:30:16.572 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1254287 /var/tmp/bperf.sock 00:30:16.572 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:16.572 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1254287 ']' 00:30:16.572 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:16.572 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:16.572 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:16.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:16.572 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:16.572 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:16.572 [2024-10-14 17:47:15.573479] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:30:16.572 [2024-10-14 17:47:15.573526] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254287 ] 00:30:16.572 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:16.572 Zero copy mechanism will not be used. 00:30:16.572 [2024-10-14 17:47:15.641722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.572 [2024-10-14 17:47:15.678379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.831 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:16.831 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:30:16.832 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:16.832 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:16.832 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:17.090 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:17.090 17:47:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:17.349 nvme0n1 00:30:17.349 17:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:17.349 17:47:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:17.349 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:17.349 Zero copy mechanism will not be used. 00:30:17.349 Running I/O for 2 seconds... 
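The killprocess sequence traced after each run above condenses to the sketch below, a simplification of the autotest_common.sh helper rather than its full logic: probe the pid, refuse to signal a sudo wrapper, then SIGTERM and reap.

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                                    # must still be running
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1  # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap; bperf prints its shutdown latency summary here
    }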
00:30:19.664 7170.00 IOPS, 896.25 MiB/s [2024-10-14T15:47:18.802Z] 7208.50 IOPS, 901.06 MiB/s 00:30:19.664 Latency(us) 00:30:19.664 [2024-10-14T15:47:18.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.664 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:19.664 nvme0n1 : 2.00 7207.41 900.93 0.00 0.00 2216.49 1310.72 11297.16 00:30:19.664 [2024-10-14T15:47:18.802Z] =================================================================================================================== 00:30:19.664 [2024-10-14T15:47:18.802Z] Total : 7207.41 900.93 0.00 0.00 2216.49 1310.72 11297.16 00:30:19.664 { 00:30:19.664 "results": [ 00:30:19.664 { 00:30:19.664 "job": "nvme0n1", 00:30:19.664 "core_mask": "0x2", 00:30:19.664 "workload": "randwrite", 00:30:19.664 "status": "finished", 00:30:19.664 "queue_depth": 16, 00:30:19.664 "io_size": 131072, 00:30:19.664 "runtime": 2.003078, 00:30:19.664 "iops": 7207.4077993967285, 00:30:19.664 "mibps": 900.9259749245911, 00:30:19.664 "io_failed": 0, 00:30:19.664 "io_timeout": 0, 00:30:19.664 "avg_latency_us": 2216.4918526141496, 00:30:19.664 "min_latency_us": 1310.72, 00:30:19.664 "max_latency_us": 11297.158095238095 00:30:19.664 } 00:30:19.664 ], 00:30:19.664 "core_count": 1 00:30:19.664 } 00:30:19.664 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:19.664 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:19.664 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:19.664 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:19.664 | select(.opcode=="crc32c") 00:30:19.664 | "\(.module_name) \(.executed)"' 00:30:19.664 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:19.664 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:19.664 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:19.664 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:19.664 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:19.665 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1254287 00:30:19.665 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1254287 ']' 00:30:19.665 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1254287 00:30:19.665 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:30:19.665 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:19.665 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1254287 00:30:19.665 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:19.665 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:30:19.665 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1254287' 00:30:19.665 killing process with pid 1254287 00:30:19.665 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1254287 00:30:19.665 Received shutdown signal, test time was about 2.000000 seconds 00:30:19.665 00:30:19.665 Latency(us) 00:30:19.665 [2024-10-14T15:47:18.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.665 [2024-10-14T15:47:18.803Z] =================================================================================================================== 00:30:19.665 [2024-10-14T15:47:18.803Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:19.665 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1254287 00:30:19.924 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1252625 00:30:19.924 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1252625 ']' 00:30:19.924 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1252625 00:30:19.924 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:30:19.924 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:19.924 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1252625 00:30:19.924 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:19.924 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:19.924 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1252625' 00:30:19.924 killing process with pid 1252625 00:30:19.924 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1252625 00:30:19.924 17:47:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1252625 00:30:19.924 00:30:19.924 real 0m13.686s 00:30:19.924 user 0m26.109s 00:30:19.924 sys 0m4.633s 00:30:19.924 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:19.924 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:19.924 ************************************ 00:30:19.924 END TEST nvmf_digest_clean 00:30:19.924 ************************************ 00:30:20.183 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:20.183 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:20.183 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:20.183 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:20.183 ************************************ 00:30:20.183 START TEST nvmf_digest_error 00:30:20.183 ************************************ 00:30:20.183 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:30:20.183 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:20.183 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:20.183 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:20.184 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:20.184 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=1254789 00:30:20.184 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 1254789 00:30:20.184 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:20.184 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1254789 ']' 00:30:20.184 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.184 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:20.184 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.184 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:20.184 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:20.184 [2024-10-14 17:47:19.159431] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:30:20.184 [2024-10-14 17:47:19.159475] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:20.184 [2024-10-14 17:47:19.213412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.184 [2024-10-14 17:47:19.254698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:20.184 [2024-10-14 17:47:19.254731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:20.184 [2024-10-14 17:47:19.254738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:20.184 [2024-10-14 17:47:19.254744] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:20.184 [2024-10-14 17:47:19.254749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:20.184 [2024-10-14 17:47:19.255319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.184 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:20.184 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:30:20.184 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:20.184 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:20.184 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:20.443 [2024-10-14 17:47:19.347812] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:20.443 null0 00:30:20.443 [2024-10-14 17:47:19.439040] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:20.443 [2024-10-14 17:47:19.463232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1254980 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1254980 /var/tmp/bperf.sock 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1254980 ']' 
00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:20.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:20.443 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:20.443 [2024-10-14 17:47:19.515712] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:30:20.443 [2024-10-14 17:47:19.515753] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254980 ] 00:30:20.443 [2024-10-14 17:47:19.583535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.702 [2024-10-14 17:47:19.625811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:20.702 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:20.702 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:30:20.702 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:20.702 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:20.960 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:20.960 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.960 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:20.960 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.960 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:20.960 17:47:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:21.219 nvme0n1 00:30:21.219 17:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:21.219 17:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.219 17:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
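The nvmf_digest_error setup rewires the digest path for fault injection: at target startup the crc32c opcode was assigned to the accel "error" module (accel_rpc NOTICE above), bperf is told to keep NVMe error stats and retry failed I/O indefinitely, and corruption is then injected for 256 operations. The RPC sequence, as traced (rpc_cmd targets the nvmf_tgt socket; the rpc array targets bperf):

    rpc_cmd accel_assign_opc -o crc32c -m error              # target: crc32c -> error module
    "${rpc[@]}" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable    # target: start clean
    "${rpc[@]}" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt 256 digests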
00:30:21.219 17:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:21.219 17:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:21.219 17:47:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:21.219 Running I/O for 2 seconds...
00:30:21.219 [2024-10-14 17:47:20.294822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80)
00:30:21.219 [2024-10-14 17:47:20.294863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.219 [2024-10-14 17:47:20.294873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.219 [2024-10-14 17:47:20.307318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80)
00:30:21.219 [2024-10-14 17:47:20.307342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.219 [2024-10-14 17:47:20.307351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.219 [2024-10-14 17:47:20.318748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80)
00:30:21.219 [2024-10-14 17:47:20.318769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.219 [2024-10-14 17:47:20.318778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.219 [2024-10-14 17:47:20.327330] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80)
00:30:21.219 [2024-10-14 17:47:20.327352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.219 [2024-10-14 17:47:20.327360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.219 [2024-10-14 17:47:20.339725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80)
00:30:21.219 [2024-10-14 17:47:20.339746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.219 [2024-10-14 17:47:20.339755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:21.219 [2024-10-14 17:47:20.352106] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80)
00:30:21.219 [2024-10-14 17:47:20.352128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.219 [2024-10-14 17:47:20.352136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.364791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.364812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.364821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.377546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.377568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.377576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.390160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.390181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.390189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.401425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.401445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.401453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.410327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.410347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.410355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.421255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.421277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.421286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.433988] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.434009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.434018] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.446741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.446761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.446769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.460092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.460112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.460121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.471512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.471532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.471540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.481695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.481715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.481727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.490593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.490618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.490627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.500326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.500347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.500354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.510423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.510443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 
17:47:20.510451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.520182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.520202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.520210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.529997] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.530016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.530024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.539646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.539667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.539675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.548132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.548153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.548161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.557696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.557717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.557725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.568654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.568680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.568689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.578403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.578423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15534 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.578432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.587561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.587583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.587591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.596806] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.596827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.596835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.607984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.608006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.608014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.479 [2024-10-14 17:47:20.617793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.479 [2024-10-14 17:47:20.617816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.479 [2024-10-14 17:47:20.617825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.739 [2024-10-14 17:47:20.627362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.739 [2024-10-14 17:47:20.627383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.739 [2024-10-14 17:47:20.627391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.739 [2024-10-14 17:47:20.637843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.739 [2024-10-14 17:47:20.637864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.739 [2024-10-14 17:47:20.637872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.739 [2024-10-14 17:47:20.645484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.739 [2024-10-14 17:47:20.645505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:122 nsid:1 lba:25427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.739 [2024-10-14 17:47:20.645513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.739 [2024-10-14 17:47:20.655898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.739 [2024-10-14 17:47:20.655919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.739 [2024-10-14 17:47:20.655927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.739 [2024-10-14 17:47:20.667006] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.739 [2024-10-14 17:47:20.667028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.739 [2024-10-14 17:47:20.667035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.739 [2024-10-14 17:47:20.677697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.739 [2024-10-14 17:47:20.677718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.739 [2024-10-14 17:47:20.677727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.739 [2024-10-14 17:47:20.686388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.739 [2024-10-14 17:47:20.686408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.739 [2024-10-14 17:47:20.686417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.739 [2024-10-14 17:47:20.697193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.739 [2024-10-14 17:47:20.697214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.739 [2024-10-14 17:47:20.697222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.739 [2024-10-14 17:47:20.709388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.739 [2024-10-14 17:47:20.709408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.739 [2024-10-14 17:47:20.709416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.739 [2024-10-14 17:47:20.720853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.739 [2024-10-14 17:47:20.720874] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.739 [2024-10-14 17:47:20.720883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.739 [2024-10-14 17:47:20.729877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.739 [2024-10-14 17:47:20.729897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.739 [2024-10-14 17:47:20.729905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.739 [2024-10-14 17:47:20.741257] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.739 [2024-10-14 17:47:20.741285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.739 [2024-10-14 17:47:20.741293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.739 [2024-10-14 17:47:20.749959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.739 [2024-10-14 17:47:20.749981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.739 [2024-10-14 17:47:20.749989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.739 [2024-10-14 17:47:20.760039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.739 [2024-10-14 17:47:20.760060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.739 [2024-10-14 17:47:20.760068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.739 [2024-10-14 17:47:20.769418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.739 [2024-10-14 17:47:20.769439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.739 [2024-10-14 17:47:20.769447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.739 [2024-10-14 17:47:20.780750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.739 [2024-10-14 17:47:20.780771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.739 [2024-10-14 17:47:20.780779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.739 [2024-10-14 17:47:20.791762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x17b4b80) 00:30:21.739 [2024-10-14 17:47:20.791782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.739 [2024-10-14 17:47:20.791790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.739 [2024-10-14 17:47:20.804113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.740 [2024-10-14 17:47:20.804136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.740 [2024-10-14 17:47:20.804144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.740 [2024-10-14 17:47:20.815054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.740 [2024-10-14 17:47:20.815074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.740 [2024-10-14 17:47:20.815082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.740 [2024-10-14 17:47:20.823762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.740 [2024-10-14 17:47:20.823782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.740 [2024-10-14 17:47:20.823790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.740 [2024-10-14 17:47:20.834410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.740 [2024-10-14 17:47:20.834431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.740 [2024-10-14 17:47:20.834439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.740 [2024-10-14 17:47:20.845642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.740 [2024-10-14 17:47:20.845664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.740 [2024-10-14 17:47:20.845671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.740 [2024-10-14 17:47:20.855064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.740 [2024-10-14 17:47:20.855085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.740 [2024-10-14 17:47:20.855093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.740 [2024-10-14 17:47:20.864400] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.740 [2024-10-14 17:47:20.864420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.740 [2024-10-14 17:47:20.864428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.740 [2024-10-14 17:47:20.873519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.740 [2024-10-14 17:47:20.873540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.740 [2024-10-14 17:47:20.873548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.999 [2024-10-14 17:47:20.882589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.999 [2024-10-14 17:47:20.882615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.999 [2024-10-14 17:47:20.882624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.999 [2024-10-14 17:47:20.892253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.999 [2024-10-14 17:47:20.892273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.999 [2024-10-14 17:47:20.892281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.999 [2024-10-14 17:47:20.902446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.999 [2024-10-14 17:47:20.902466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.999 [2024-10-14 17:47:20.902475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.999 [2024-10-14 17:47:20.911146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.999 [2024-10-14 17:47:20.911167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.999 [2024-10-14 17:47:20.911178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.999 [2024-10-14 17:47:20.920093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.999 [2024-10-14 17:47:20.920112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.999 [2024-10-14 17:47:20.920120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:30:21.999 [2024-10-14 17:47:20.929581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.999 [2024-10-14 17:47:20.929606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.999 [2024-10-14 17:47:20.929615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.999 [2024-10-14 17:47:20.939558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.999 [2024-10-14 17:47:20.939579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.999 [2024-10-14 17:47:20.939588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.999 [2024-10-14 17:47:20.949306] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.999 [2024-10-14 17:47:20.949326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.999 [2024-10-14 17:47:20.949334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.999 [2024-10-14 17:47:20.961496] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.999 [2024-10-14 17:47:20.961517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.999 [2024-10-14 17:47:20.961525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.999 [2024-10-14 17:47:20.971448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.999 [2024-10-14 17:47:20.971468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.999 [2024-10-14 17:47:20.971476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.999 [2024-10-14 17:47:20.981313] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.999 [2024-10-14 17:47:20.981334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.999 [2024-10-14 17:47:20.981342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.999 [2024-10-14 17:47:20.990393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.999 [2024-10-14 17:47:20.990414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.999 [2024-10-14 17:47:20.990422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.999 [2024-10-14 17:47:21.000778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.999 [2024-10-14 17:47:21.000802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.999 [2024-10-14 17:47:21.000809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.999 [2024-10-14 17:47:21.012035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.999 [2024-10-14 17:47:21.012056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.999 [2024-10-14 17:47:21.012063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.999 [2024-10-14 17:47:21.021572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.999 [2024-10-14 17:47:21.021592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.999 [2024-10-14 17:47:21.021605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.999 [2024-10-14 17:47:21.030346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:21.999 [2024-10-14 17:47:21.030366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.999 [2024-10-14 17:47:21.030373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.000 [2024-10-14 17:47:21.038749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.000 [2024-10-14 17:47:21.038768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.000 [2024-10-14 17:47:21.038777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.000 [2024-10-14 17:47:21.049075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.000 [2024-10-14 17:47:21.049095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.000 [2024-10-14 17:47:21.049103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.000 [2024-10-14 17:47:21.058841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.000 [2024-10-14 17:47:21.058860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.000 [2024-10-14 17:47:21.058868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.000 [2024-10-14 17:47:21.069282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.000 [2024-10-14 17:47:21.069301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.000 [2024-10-14 17:47:21.069309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.000 [2024-10-14 17:47:21.077972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.000 [2024-10-14 17:47:21.077991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.000 [2024-10-14 17:47:21.077999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.000 [2024-10-14 17:47:21.087491] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.000 [2024-10-14 17:47:21.087511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.000 [2024-10-14 17:47:21.087518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.000 [2024-10-14 17:47:21.097184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.000 [2024-10-14 17:47:21.097203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.000 [2024-10-14 17:47:21.097211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.000 [2024-10-14 17:47:21.106578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.000 [2024-10-14 17:47:21.106597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.000 [2024-10-14 17:47:21.106611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.000 [2024-10-14 17:47:21.114885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.000 [2024-10-14 17:47:21.114905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.000 [2024-10-14 17:47:21.114913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.000 [2024-10-14 17:47:21.125830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.000 [2024-10-14 17:47:21.125849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.000 
[2024-10-14 17:47:21.125857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.000 [2024-10-14 17:47:21.138384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.000 [2024-10-14 17:47:21.138403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.000 [2024-10-14 17:47:21.138412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.259 [2024-10-14 17:47:21.148766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.259 [2024-10-14 17:47:21.148786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.259 [2024-10-14 17:47:21.148795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.259 [2024-10-14 17:47:21.157351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.259 [2024-10-14 17:47:21.157371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.259 [2024-10-14 17:47:21.157378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.259 [2024-10-14 17:47:21.170617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.259 [2024-10-14 17:47:21.170641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.259 [2024-10-14 17:47:21.170650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.259 [2024-10-14 17:47:21.179109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.259 [2024-10-14 17:47:21.179131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.259 [2024-10-14 17:47:21.179139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.259 [2024-10-14 17:47:21.190936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.259 [2024-10-14 17:47:21.190957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.259 [2024-10-14 17:47:21.190965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.259 [2024-10-14 17:47:21.200428] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.259 [2024-10-14 17:47:21.200447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12171 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.259 [2024-10-14 17:47:21.200454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.259 [2024-10-14 17:47:21.209941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.259 [2024-10-14 17:47:21.209960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.259 [2024-10-14 17:47:21.209968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.259 [2024-10-14 17:47:21.220188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.259 [2024-10-14 17:47:21.220209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.259 [2024-10-14 17:47:21.220217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.259 [2024-10-14 17:47:21.228840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.259 [2024-10-14 17:47:21.228860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.259 [2024-10-14 17:47:21.228868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.259 [2024-10-14 17:47:21.239747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.259 [2024-10-14 17:47:21.239767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.259 [2024-10-14 17:47:21.239775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.259 [2024-10-14 17:47:21.248475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.259 [2024-10-14 17:47:21.248495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.259 [2024-10-14 17:47:21.248502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.259 [2024-10-14 17:47:21.260910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.259 [2024-10-14 17:47:21.260929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.260 [2024-10-14 17:47:21.260937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.260 [2024-10-14 17:47:21.272405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.260 [2024-10-14 17:47:21.272425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:12999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.260 [2024-10-14 17:47:21.272433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.260 24631.00 IOPS, 96.21 MiB/s [2024-10-14T15:47:21.398Z] [2024-10-14 17:47:21.282281] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.260 [2024-10-14 17:47:21.282300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.260 [2024-10-14 17:47:21.282308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.260 [2024-10-14 17:47:21.294839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.260 [2024-10-14 17:47:21.294858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.260 [2024-10-14 17:47:21.294866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.260 [2024-10-14 17:47:21.307126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.260 [2024-10-14 17:47:21.307146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.260 [2024-10-14 17:47:21.307153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.260 [2024-10-14 17:47:21.315154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.260 [2024-10-14 17:47:21.315173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.260 [2024-10-14 17:47:21.315181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.260 [2024-10-14 17:47:21.327298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.260 [2024-10-14 17:47:21.327318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.260 [2024-10-14 17:47:21.327326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.260 [2024-10-14 17:47:21.339355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.260 [2024-10-14 17:47:21.339375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.260 [2024-10-14 17:47:21.339383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.260 [2024-10-14 17:47:21.350835] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x17b4b80) 00:30:22.260 [2024-10-14 17:47:21.350855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.260 [2024-10-14 17:47:21.350866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.260 [2024-10-14 17:47:21.358985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.260 [2024-10-14 17:47:21.359005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.260 [2024-10-14 17:47:21.359012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.260 [2024-10-14 17:47:21.371060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.260 [2024-10-14 17:47:21.371079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.260 [2024-10-14 17:47:21.371086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.260 [2024-10-14 17:47:21.382745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.260 [2024-10-14 17:47:21.382765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.260 [2024-10-14 17:47:21.382773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.260 [2024-10-14 17:47:21.391405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.260 [2024-10-14 17:47:21.391425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.260 [2024-10-14 17:47:21.391433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.519 [2024-10-14 17:47:21.403958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.519 [2024-10-14 17:47:21.403978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.519 [2024-10-14 17:47:21.403987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.519 [2024-10-14 17:47:21.416022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.519 [2024-10-14 17:47:21.416042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.519 [2024-10-14 17:47:21.416049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.519 [2024-10-14 17:47:21.424250] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.519 [2024-10-14 17:47:21.424270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.519 [2024-10-14 17:47:21.424278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.519 [2024-10-14 17:47:21.435597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.519 [2024-10-14 17:47:21.435621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.519 [2024-10-14 17:47:21.435629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.519 [2024-10-14 17:47:21.448082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.519 [2024-10-14 17:47:21.448105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.519 [2024-10-14 17:47:21.448113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.519 [2024-10-14 17:47:21.460667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.519 [2024-10-14 17:47:21.460687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.519 [2024-10-14 17:47:21.460695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.519 [2024-10-14 17:47:21.471761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.519 [2024-10-14 17:47:21.471781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.519 [2024-10-14 17:47:21.471788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.519 [2024-10-14 17:47:21.483176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.519 [2024-10-14 17:47:21.483195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.519 [2024-10-14 17:47:21.483203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.520 [2024-10-14 17:47:21.492161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.520 [2024-10-14 17:47:21.492181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.520 [2024-10-14 17:47:21.492189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:30:22.520 [2024-10-14 17:47:21.504057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.520 [2024-10-14 17:47:21.504077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.520 [2024-10-14 17:47:21.504085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.520 [2024-10-14 17:47:21.515143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.520 [2024-10-14 17:47:21.515162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.520 [2024-10-14 17:47:21.515170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.520 [2024-10-14 17:47:21.523807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.520 [2024-10-14 17:47:21.523826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.520 [2024-10-14 17:47:21.523834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.520 [2024-10-14 17:47:21.536266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.520 [2024-10-14 17:47:21.536286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.520 [2024-10-14 17:47:21.536294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.520 [2024-10-14 17:47:21.548704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.520 [2024-10-14 17:47:21.548725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.520 [2024-10-14 17:47:21.548733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.520 [2024-10-14 17:47:21.561266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.520 [2024-10-14 17:47:21.561285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.520 [2024-10-14 17:47:21.561293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.520 [2024-10-14 17:47:21.573674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.520 [2024-10-14 17:47:21.573693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.520 [2024-10-14 17:47:21.573701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.520 [2024-10-14 17:47:21.585063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.520 [2024-10-14 17:47:21.585082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.520 [2024-10-14 17:47:21.585090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.520 [2024-10-14 17:47:21.594053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.520 [2024-10-14 17:47:21.594073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.520 [2024-10-14 17:47:21.594081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.520 [2024-10-14 17:47:21.606399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.520 [2024-10-14 17:47:21.606419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.520 [2024-10-14 17:47:21.606427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.520 [2024-10-14 17:47:21.618978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.520 [2024-10-14 17:47:21.618997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.520 [2024-10-14 17:47:21.619005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.520 [2024-10-14 17:47:21.629916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.520 [2024-10-14 17:47:21.629935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.520 [2024-10-14 17:47:21.629943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.520 [2024-10-14 17:47:21.639216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.520 [2024-10-14 17:47:21.639240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.520 [2024-10-14 17:47:21.639248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.520 [2024-10-14 17:47:21.650477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.520 [2024-10-14 17:47:21.650496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.520 [2024-10-14 17:47:21.650504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.659913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.659933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.659941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.668262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.668281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.668288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.677490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.677510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.677518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.687265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.687284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.687292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.696940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.696958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.696966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.705556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.705575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.705583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.715038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.715057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:22.780 [2024-10-14 17:47:21.715065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.726271] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.726291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.726299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.734298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.734318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.734326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.746432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.746452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.746461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.754620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.754640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.754648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.766382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.766403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.766411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.775965] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.775985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.775994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.785375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.785394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7305 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.785403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.794463] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.794483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.794491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.802784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.802804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.802815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.813325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.813345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.813352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.824347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.824367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.824374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.832147] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.832167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.832175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.843994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.844014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.844022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.856952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.856973] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.856981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.866736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.866755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.866763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.876048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.876068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.876076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.885111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.885130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.780 [2024-10-14 17:47:21.885138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.780 [2024-10-14 17:47:21.896432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.780 [2024-10-14 17:47:21.896454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.781 [2024-10-14 17:47:21.896462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.781 [2024-10-14 17:47:21.904247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.781 [2024-10-14 17:47:21.904266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.781 [2024-10-14 17:47:21.904273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.781 [2024-10-14 17:47:21.916698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:22.781 [2024-10-14 17:47:21.916718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.781 [2024-10-14 17:47:21.916726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:21.927906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 
17:47:21.927926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:21.927934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:21.936860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:21.936878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:21.936886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:21.948359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:21.948379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:21.948387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:21.961001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:21.961021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:21.961029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:21.972790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:21.972809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:21.972817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:21.983734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:21.983754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:21.983761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:21.993043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:21.993062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:21.993070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:22.003214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:22.003236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:22.003244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:22.014964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:22.014983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:22.014991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:22.023416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:22.023439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:22.023446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:22.035996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:22.036017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:22.036025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:22.044542] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:22.044563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:22.044571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:22.054301] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:22.054321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:22.054329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:22.063438] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:22.063457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:22.063466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:22.073067] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:22.073088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:22.073102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:22.082929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:22.082949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:22.082956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:22.092762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:22.092781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:22.092788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:22.100939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:22.100959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:22.100966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:22.110477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:22.110497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:22.110505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:22.119917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:22.119937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:22.119945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.040 [2024-10-14 17:47:22.130401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:22.130420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:22.130428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:30:23.040 [2024-10-14 17:47:22.138047] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.040 [2024-10-14 17:47:22.138071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.040 [2024-10-14 17:47:22.138079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.041 [2024-10-14 17:47:22.147019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.041 [2024-10-14 17:47:22.147039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.041 [2024-10-14 17:47:22.147047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.041 [2024-10-14 17:47:22.156830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.041 [2024-10-14 17:47:22.156850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.041 [2024-10-14 17:47:22.156858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.041 [2024-10-14 17:47:22.166902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.041 [2024-10-14 17:47:22.166922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.041 [2024-10-14 17:47:22.166930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.041 [2024-10-14 17:47:22.175087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.041 [2024-10-14 17:47:22.175106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.041 [2024-10-14 17:47:22.175114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.300 [2024-10-14 17:47:22.186364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.300 [2024-10-14 17:47:22.186383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.300 [2024-10-14 17:47:22.186391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.301 [2024-10-14 17:47:22.196305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.301 [2024-10-14 17:47:22.196324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.301 [2024-10-14 17:47:22.196332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.301 [2024-10-14 17:47:22.204780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.301 [2024-10-14 17:47:22.204801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.301 [2024-10-14 17:47:22.204809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.301 [2024-10-14 17:47:22.216336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.301 [2024-10-14 17:47:22.216357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.301 [2024-10-14 17:47:22.216365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.301 [2024-10-14 17:47:22.228487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.301 [2024-10-14 17:47:22.228506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.301 [2024-10-14 17:47:22.228514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.301 [2024-10-14 17:47:22.241040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.301 [2024-10-14 17:47:22.241059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.301 [2024-10-14 17:47:22.241071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.301 [2024-10-14 17:47:22.249353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.301 [2024-10-14 17:47:22.249374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.301 [2024-10-14 17:47:22.249382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.301 [2024-10-14 17:47:22.261252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.301 [2024-10-14 17:47:22.261271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.301 [2024-10-14 17:47:22.261280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:23.301 [2024-10-14 17:47:22.269918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80) 00:30:23.301 [2024-10-14 17:47:22.269938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.301 [2024-10-14 17:47:22.269946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.301 [2024-10-14 17:47:22.281393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17b4b80)
00:30:23.301 [2024-10-14 17:47:22.281412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:23.301 [2024-10-14 17:47:22.281421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:23.301 24629.00 IOPS, 96.21 MiB/s
00:30:23.301 Latency(us)
[2024-10-14T15:47:22.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:23.301 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:23.301 nvme0n1 : 2.05 24147.49 94.33 0.00 0.00 5190.57 2324.97 43191.34
00:30:23.301
[2024-10-14T15:47:22.439Z] ===================================================================================================================
00:30:23.301
[2024-10-14T15:47:22.439Z] Total : 24147.49 94.33 0.00 0.00 5190.57 2324.97 43191.34
00:30:23.301 {
00:30:23.301   "results": [
00:30:23.301     {
00:30:23.301       "job": "nvme0n1",
00:30:23.301       "core_mask": "0x2",
00:30:23.301       "workload": "randread",
00:30:23.301       "status": "finished",
00:30:23.301       "queue_depth": 128,
00:30:23.301       "io_size": 4096,
00:30:23.301       "runtime": 2.045471,
00:30:23.301       "iops": 24147.49463570982,
00:30:23.301       "mibps": 94.32615092074148,
00:30:23.301       "io_failed": 0,
00:30:23.301       "io_timeout": 0,
00:30:23.301       "avg_latency_us": 5190.5717138634445,
00:30:23.301       "min_latency_us": 2324.967619047619,
00:30:23.301       "max_latency_us": 43191.34476190476
00:30:23.301     }
00:30:23.301   ],
00:30:23.301   "core_count": 1
00:30:23.301 }
00:30:23.301 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:23.301 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:23.301 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:23.301 | .driver_specific
00:30:23.301 | .nvme_error
00:30:23.301 | .status_code
00:30:23.301 | .command_transient_transport_error'
00:30:23.301 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:23.560 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 193 > 0 ))
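The `(( 193 > 0 ))` evaluation above is the pass condition of this phase: the controller was attached with `--nvme-error-stat`, so the NVMe bdev module keeps per-status-code error counters, and `bdev_get_iostat` reports them under `driver_specific.nvme_error`. A condensed sketch of that query, assuming, as in this log, a bdevperf instance listening on /var/tmp/bperf.sock that exposes a bdev named nvme0n1:

    # Count the COMMAND TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1.
    # rpc.py path and socket are the ones visible in the trace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # non-zero means the injected digest corruption was detected

Here the counter read back as 193, so the script proceeds to kill this bperf instance and rerun with a larger block size.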
00:30:23.560 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1254980
00:30:23.560 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1254980 ']'
00:30:23.560 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1254980
00:30:23.560 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:30:23.560 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:23.560 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1254980
00:30:23.560 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:23.560 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:23.560 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1254980'
00:30:23.560 killing process with pid 1254980
00:30:23.560 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1254980
00:30:23.560 Received shutdown signal, test time was about 2.000000 seconds
00:30:23.560
00:30:23.560 Latency(us)
[2024-10-14T15:47:22.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-10-14T15:47:22.698Z] ===================================================================================================================
[2024-10-14T15:47:22.698Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:23.560 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1254980
00:30:23.819 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:30:23.819 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:23.819 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:30:23.819 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:23.819 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:23.819 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1255500
00:30:23.819 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1255500 /var/tmp/bperf.sock
00:30:23.819 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:30:23.819 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1255500 ']'
00:30:23.819 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:23.819 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:23.819 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:23.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:23.819 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:30:23.819 17:47:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
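run_bperf_err then brings up a fresh bdevperf for the 131072-byte, queue-depth-16 pass: the binary is started idle (-z) on its own RPC socket, and the script waits for that socket before configuring anything. A minimal sketch of the launch pattern, using the binary path and socket from this log; `waitforlisten` is SPDK's test helper, and the polling loop below is a plain stand-in for it, not its actual implementation:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Stand-in for waitforlisten: poll until the RPC socket answers.
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done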
00:30:23.819 [2024-10-14 17:47:22.816837] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization...
00:30:23.819 [2024-10-14 17:47:22.816884] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255500 ]
00:30:23.819 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:23.819 Zero copy mechanism will not be used.
00:30:23.819 [2024-10-14 17:47:22.883154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:23.819 [2024-10-14 17:47:22.924852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:24.078 17:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:30:24.078 17:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:30:24.078 17:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:24.078 17:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:24.078 17:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:24.337 17:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:24.337 17:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:24.337 17:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:24.337 17:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:24.337 17:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:24.597 nvme0n1
00:30:24.597 17:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:24.597 17:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:24.597 17:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:24.597 17:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:24.597 17:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:24.597 17:47:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:24.858 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:24.858 Zero copy mechanism will not be used.
00:30:24.858 Running I/O for 2 seconds...
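The RPC sequence just traced is what manufactures the failures that follow: `bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1` enables error accounting with unbounded retries, `bdev_nvme_attach_controller --ddgst` turns on NVMe/TCP data digests for the connection, and `accel_error_inject_error -o crc32c -t corrupt -i 32` has the accel error module corrupt crc32c results, so received payloads no longer verify and each affected READ completes as a data digest error with COMMAND TRANSIENT TRANSPORT ERROR (00/22) status. A sketch of the same setup with the socket, address, and NQN copied from this run; the precise semantics of `-i` belong to the accel_error module and are not asserted here:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0       # creates bdev nvme0n1
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32  # corrupt crc32c results
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests                  # start the queued workload

Each `data digest error on tqpair=(0x5e8600)` line below is one corrupted crc32c verification surfacing through that path; dnr:0 in the completions marks them as retriable, which is why the run keeps going.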
00:30:24.858 [2024-10-14 17:47:23.752191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:24.858 [2024-10-14 17:47:23.752226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.858 [2024-10-14 17:47:23.752237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:24.858 [2024-10-14 17:47:23.757558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:24.858 [2024-10-14 17:47:23.757583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.858 [2024-10-14 17:47:23.757592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:24.858 [2024-10-14 17:47:23.762979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:24.858 [2024-10-14 17:47:23.763000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.858 [2024-10-14 17:47:23.763008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:24.858 [2024-10-14 17:47:23.768348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:24.858 [2024-10-14 17:47:23.768373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.858 [2024-10-14 17:47:23.768382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.858 [2024-10-14 17:47:23.773656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:24.858 [2024-10-14 17:47:23.773678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.858 [2024-10-14 17:47:23.773687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:24.858 [2024-10-14 17:47:23.778885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:24.858 [2024-10-14 17:47:23.778906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.858 [2024-10-14 17:47:23.778914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:24.858 [2024-10-14 17:47:23.784123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:24.858 [2024-10-14 17:47:23.784143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.858 [2024-10-14 17:47:23.784152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:24.858 [2024-10-14 17:47:23.789260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:24.858 [2024-10-14 17:47:23.789280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.858 [2024-10-14 17:47:23.789288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.858 [2024-10-14 17:47:23.794420] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:24.858 [2024-10-14 17:47:23.794444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.858 [2024-10-14 17:47:23.794452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:24.858 [2024-10-14 17:47:23.799611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:24.858 [2024-10-14 17:47:23.799631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.858 [2024-10-14 17:47:23.799639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:24.858 [2024-10-14 17:47:23.804744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:24.858 [2024-10-14 17:47:23.804764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.858 [2024-10-14 17:47:23.804773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:24.858 [2024-10-14 17:47:23.809834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:24.858 [2024-10-14 17:47:23.809856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.858 [2024-10-14 17:47:23.809864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:24.858 [2024-10-14 17:47:23.815069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:24.858 [2024-10-14 17:47:23.815089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.858 [2024-10-14 17:47:23.815097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:24.858 [2024-10-14 17:47:23.820332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:24.858 [2024-10-14 17:47:23.820352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:24.858 [2024-10-14 17:47:23.820360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
From 00:30:24.858 through 00:30:25.644 (wall clock 17:47:23.825 to 17:47:24.533) the same three-record failure repeats for READ after READ on qid:1, varying only in timestamp, cid (0-11 and 15), lba, and sqhd (cycling 0001/0021/0041/0061):
<t> [<wall clock>] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600)
<t> [<wall clock>] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:<n> nsid:1 lba:<varies> len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
<t> [<wall clock>] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:<n> cdw0:0 sqhd:<0001|0021|0041|0061> p:0 m:0 dnr:0
The run continues unchanged in the raw records that resume after the note below.
00:30:25.644 [2024-10-14 17:47:24.537658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.644 [2024-10-14 17:47:24.537678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.644 [2024-10-14 17:47:24.537686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.644 [2024-10-14 17:47:24.542318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.644 [2024-10-14 17:47:24.542338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.644 [2024-10-14 17:47:24.542347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.644 [2024-10-14 17:47:24.546825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.644 [2024-10-14 17:47:24.546845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.644 [2024-10-14 17:47:24.546853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.644 [2024-10-14 17:47:24.551282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.644 [2024-10-14 17:47:24.551303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.644 [2024-10-14 17:47:24.551311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.644 [2024-10-14 17:47:24.555802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.644 [2024-10-14 17:47:24.555822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.644 [2024-10-14 17:47:24.555829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.644 [2024-10-14 17:47:24.560321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.644 [2024-10-14 17:47:24.560342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.560350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.564915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.564935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.564943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.569504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.569525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.569533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.574057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.574081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.574089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.578587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.578614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.578622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.583113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.583133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.583141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.587576] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.587597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.587610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.592184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.592205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.592213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.596801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.596821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.596833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.601272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.601293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.601302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.605911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.605932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.605939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.610546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.610567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.610576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.615076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.615096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.615104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.619763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.619784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.619792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.623004] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.623024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.623032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.627239] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.627260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.627268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.632350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.632371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.632378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.637450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.637476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.637484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.643062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.643083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.643091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.648647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.648668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.648676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.654672] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.654693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.654701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.659987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.660009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.660016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.665470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.665491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 
[2024-10-14 17:47:24.665500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.671460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.671481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.671489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.677144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.677164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.677172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.682655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.682676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.682684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.689660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.689681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.689689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.695144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.695164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.695172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.701143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.645 [2024-10-14 17:47:24.701163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.645 [2024-10-14 17:47:24.701171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.645 [2024-10-14 17:47:24.706496] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.646 [2024-10-14 17:47:24.706518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:25.646 [2024-10-14 17:47:24.706526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.646 [2024-10-14 17:47:24.712414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.646 [2024-10-14 17:47:24.712436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.646 [2024-10-14 17:47:24.712444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.646 [2024-10-14 17:47:24.718514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.646 [2024-10-14 17:47:24.718535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.646 [2024-10-14 17:47:24.718543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.646 [2024-10-14 17:47:24.725403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.646 [2024-10-14 17:47:24.725425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.646 [2024-10-14 17:47:24.725432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.646 [2024-10-14 17:47:24.733102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.646 [2024-10-14 17:47:24.733123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.646 [2024-10-14 17:47:24.733132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.646 [2024-10-14 17:47:24.739734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.646 [2024-10-14 17:47:24.739755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.646 [2024-10-14 17:47:24.739766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.646 [2024-10-14 17:47:24.746379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.646 [2024-10-14 17:47:24.746401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.646 [2024-10-14 17:47:24.746410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.646 5769.00 IOPS, 721.12 MiB/s [2024-10-14T15:47:24.784Z] [2024-10-14 17:47:24.753130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.646 [2024-10-14 17:47:24.753152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.646 [2024-10-14 17:47:24.753160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.646 [2024-10-14 17:47:24.758803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.646 [2024-10-14 17:47:24.758824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.646 [2024-10-14 17:47:24.758833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.646 [2024-10-14 17:47:24.764080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.646 [2024-10-14 17:47:24.764101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.646 [2024-10-14 17:47:24.764109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.646 [2024-10-14 17:47:24.768770] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.646 [2024-10-14 17:47:24.768792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.646 [2024-10-14 17:47:24.768800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.646 [2024-10-14 17:47:24.773279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.646 [2024-10-14 17:47:24.773300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.646 [2024-10-14 17:47:24.773308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.646 [2024-10-14 17:47:24.777814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.646 [2024-10-14 17:47:24.777835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.646 [2024-10-14 17:47:24.777843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.646 [2024-10-14 17:47:24.782326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.646 [2024-10-14 17:47:24.782347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.646 [2024-10-14 17:47:24.782356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.905 [2024-10-14 17:47:24.786887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.905 
[2024-10-14 17:47:24.786911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.905 [2024-10-14 17:47:24.786919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.905 [2024-10-14 17:47:24.791474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.791494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.791503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.796203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.796224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.796232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.800648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.800669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.800677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.805190] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.805211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.805218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.809686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.809707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.809715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.814190] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.814211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.814219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.819608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.819629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.819637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.824407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.824427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.824435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.829085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.829105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.829113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.833650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.833670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.833679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.838161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.838183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.838192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.842856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.842878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.842886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.847383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.847403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.847411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.851908] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.851928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.851937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.856432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.856452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.856460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.860979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.861000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.861007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.865485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.865507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.865519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.870062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.870082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.870090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.874633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.874653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.874660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.879350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.879370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.879378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:30:25.906 [2024-10-14 17:47:24.884015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.884036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.884044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.888661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.888682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.888690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.893267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.893288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.893296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.898115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.898136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.898144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.903516] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.903537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.903545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.908697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.908724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.908732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.906 [2024-10-14 17:47:24.915114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.906 [2024-10-14 17:47:24.915135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.906 [2024-10-14 17:47:24.915144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:24.922388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:24.922411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:24.922419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:24.929717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:24.929740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:24.929748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:24.936194] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:24.936215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:24.936223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:24.941364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:24.941386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:24.941394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:24.946425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:24.946446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:24.946455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:24.951920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:24.951941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:24.951950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:24.956912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:24.956933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:24.956941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:24.961706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:24.961727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:24.961735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:24.966409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:24.966430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:24.966438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:24.971800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:24.971821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:24.971829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:24.976786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:24.976806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:24.976814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:24.982246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:24.982268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:24.982276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:24.987923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:24.987945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:24.987953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:24.993464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:24.993486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:24.993494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:24.999530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:24.999551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:24.999559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:25.005916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:25.005938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:25.005951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:25.013377] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:25.013399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:25.013408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:25.020429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:25.020451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:25.020460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:25.027827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:25.027850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:25.027858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:25.036093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:25.036116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 [2024-10-14 17:47:25.036124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:25.907 [2024-10-14 17:47:25.044262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:25.907 [2024-10-14 17:47:25.044286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.907 
[2024-10-14 17:47:25.044294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.168 [2024-10-14 17:47:25.050087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:26.168 [2024-10-14 17:47:25.050108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-10-14 17:47:25.050117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.168 [2024-10-14 17:47:25.057299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:26.168 [2024-10-14 17:47:25.057320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-10-14 17:47:25.057328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.168 [2024-10-14 17:47:25.063771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:26.168 [2024-10-14 17:47:25.063793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-10-14 17:47:25.063801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:26.168 [2024-10-14 17:47:25.068982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:26.168 [2024-10-14 17:47:25.069004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-10-14 17:47:25.069012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.168 [2024-10-14 17:47:25.074472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:26.168 [2024-10-14 17:47:25.074493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-10-14 17:47:25.074502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.168 [2024-10-14 17:47:25.080010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:26.168 [2024-10-14 17:47:25.080030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.168 [2024-10-14 17:47:25.080038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.168 [2024-10-14 17:47:25.085560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600) 00:30:26.168 [2024-10-14 17:47:25.085581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3712 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0
00:30:26.168 [2024-10-14 17:47:25.085590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:26.168 [2024-10-14 17:47:25.090745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600)
00:30:26.168 [2024-10-14 17:47:25.090766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.168 [2024-10-14 17:47:25.090775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... remaining record triples from 17:47:25.095970 through 17:47:25.745997 omitted: each repeats the same pattern on tqpair=(0x5e8600) (data digest error, the failing READ, its TRANSIENT TRANSPORT ERROR (00/22) completion) and differs only in timestamp, cid, lba, and sqhd ...]
00:30:26.692 5667.00 IOPS, 708.38 MiB/s [2024-10-14T15:47:25.830Z]
00:30:26.692 [2024-10-14 17:47:25.751817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e8600)
00:30:26.692 [2024-10-14 17:47:25.751838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.692 [2024-10-14 17:47:25.751847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
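Every injected CRC32C mismatch above produces the same three-record pattern: nvme_tcp.c flags the data digest error on the qpair, nvme_qpair.c prints the READ that carried the corrupted payload, and the command completes with the retryable TRANSIENT TRANSPORT ERROR (00/22) status. A minimal sketch for tallying such failures from a saved console log follows; count_digest_failures is a hypothetical helper for illustration, not part of digest.sh:

# Hypothetical helper (not in digest.sh): tally digest-induced failures in a
# captured log. Each CRC32C mismatch yields one completion record carrying
# the (00/22) status, so counting those lines counts the injected errors.
count_digest_failures() {
    local log_file=$1
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$log_file"
}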
00:30:26.692 
00:30:26.692 Latency(us)
00:30:26.692 [2024-10-14T15:47:25.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:26.692 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:30:26.692 nvme0n1 : 2.00 5667.49 708.44 0.00 0.00 2820.13 546.13 8363.64
00:30:26.692 [2024-10-14T15:47:25.830Z] ===================================================================================================================
00:30:26.692 [2024-10-14T15:47:25.830Z] Total : 5667.49 708.44 0.00 0.00 2820.13 546.13 8363.64
00:30:26.692 {
00:30:26.692   "results": [
00:30:26.692     {
00:30:26.692       "job": "nvme0n1",
00:30:26.692       "core_mask": "0x2",
00:30:26.692       "workload": "randread",
00:30:26.692       "status": "finished",
00:30:26.692       "queue_depth": 16,
00:30:26.692       "io_size": 131072,
00:30:26.692       "runtime": 2.002649,
00:30:26.692       "iops": 5667.493404985097,
00:30:26.692       "mibps": 708.4366756231371,
00:30:26.692       "io_failed": 0,
00:30:26.692       "io_timeout": 0,
00:30:26.692       "avg_latency_us": 2820.1309710929304,
00:30:26.692       "min_latency_us": 546.1333333333333,
00:30:26.692       "max_latency_us": 8363.641904761906
00:30:26.693     }
00:30:26.693   ],
00:30:26.693   "core_count": 1
00:30:26.693 }
00:30:26.693 17:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
17:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
17:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:26.693 | .driver_specific
00:30:26.693 | .nvme_error
00:30:26.693 | .status_code
00:30:26.693 | .command_transient_transport_error'
17:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:26.952 17:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 366 > 0 ))
17:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1255500
17:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1255500 ']'
17:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1255500
17:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
17:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
17:47:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1255500
17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1255500'
killing process with pid 1255500
17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1255500
Received shutdown signal, test time was about 2.000000 seconds
00:30:26.952 Latency(us)
[2024-10-14T15:47:26.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-10-14T15:47:26.090Z] ===================================================================================================================
[2024-10-14T15:47:26.090Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:26.952 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1255500
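The asserted value 366 comes straight from bdevperf's error counters rather than from log scraping: with bdev_nvme_set_options --nvme-error-stat in effect, bdev_get_iostat exposes a per-status-code error histogram, and --bdev-retry-count -1 keeps retrying each digest failure, which is consistent with the JSON above reporting io_failed: 0 while the transient-error counter climbs. A sketch of the query the trace performs, reconstructed from the traced commands and using this job's paths and socket:

# Sketch of digest.sh's get_transient_errcount as traced above; rootdir and
# the bperf RPC socket match this job's workspace.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

get_transient_errcount() {
    local bdev=$1
    "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# The test passes when at least one transient transport error was recorded:
(( $(get_transient_errcount nvme0n1) > 0 ))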
00:30:26.952 Latency(us) 00:30:26.952 [2024-10-14T15:47:26.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.952 [2024-10-14T15:47:26.090Z] =================================================================================================================== 00:30:26.952 [2024-10-14T15:47:26.090Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:26.952 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1255500 00:30:27.210 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:30:27.210 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:27.210 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:27.210 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:27.210 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:27.210 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1255975 00:30:27.210 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1255975 /var/tmp/bperf.sock 00:30:27.210 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:30:27.210 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1255975 ']' 00:30:27.210 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:27.210 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:27.210 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:27.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:27.210 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:27.210 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:27.210 [2024-10-14 17:47:26.218982] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
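The get_transient_errcount check traced above reduces to one RPC call plus a jq filter over the bdev's NVMe error statistics. A minimal bash sketch, using the socket path, bdev name, and jq filter exactly as they appear in this trace (the nvme_error block is only populated because bdev_nvme_set_options was given --nvme-error-stat, as this run does):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # Read per-bdev iostat over bperf's RPC socket and pull out the count of
    # completions that ended in COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
          | .driver_specific
          | .nvme_error
          | .status_code
          | .command_transient_transport_error')

    # Mirrors the (( 366 > 0 )) test above: a digest-error run that records
    # zero transient transport errors means the injection never fired.
    (( errcount > 0 ))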
00:30:27.210 [2024-10-14 17:47:26.219032] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255975 ] 00:30:27.211 [2024-10-14 17:47:26.288921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.211 [2024-10-14 17:47:26.327456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.470 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:27.470 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:30:27.470 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:27.470 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:27.729 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:27.729 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.729 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:27.729 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.729 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:27.729 17:47:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:27.988 nvme0n1 00:30:27.988 17:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:27.988 17:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.988 17:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:27.988 17:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.988 17:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:27.988 17:47:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:28.247 Running I/O for 2 seconds... 
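The randwrite pass set up above follows the same shape as the randread pass before it: bdevperf is launched with -z so it starts idle and waits for RPC configuration, crc32c corruption is armed in the accel layer, and the controller is attached with --ddgst so every corrupted payload surfaces as a transient transport error. A condensed bash sketch of that sequence, with commands and arguments copied from this trace; the target-side RPC socket path is an assumption, since the trace goes through the framework's rpc_cmd helper rather than naming it:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bperf_sock=/var/tmp/bperf.sock
    target_sock=/var/tmp/spdk.sock   # assumed default target socket; not shown in the trace

    # bdevperf in -z mode starts idle and waits on its RPC socket.
    "$spdk/build/examples/bdevperf" -m 2 -r "$bperf_sock" -w randwrite -o 4096 -t 2 -q 128 -z &

    # Initiator side: keep NVMe error counters and retry failed I/O forever,
    # so injected digest errors are counted instead of failing the job.
    "$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Injection stays disabled while the controller attaches...
    "$spdk/scripts/rpc.py" -s "$target_sock" accel_error_inject_error -o crc32c -t disable

    "$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # ...then the next 256 crc32c computations are corrupted and the
    # 2-second workload is kicked off.
    "$spdk/scripts/rpc.py" -s "$target_sock" accel_error_inject_error -o crc32c -t corrupt -i 256
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests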
00:30:28.247 [2024-10-14 17:47:27.162866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166ee5c8 00:30:28.247 [2024-10-14 17:47:27.163654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.247 [2024-10-14 17:47:27.163684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:28.247 [2024-10-14 17:47:27.172110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166eea00 00:30:28.247 [2024-10-14 17:47:27.173027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.247 [2024-10-14 17:47:27.173050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:28.247 [2024-10-14 17:47:27.181364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e6300 00:30:28.247 [2024-10-14 17:47:27.181792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.247 [2024-10-14 17:47:27.181811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:28.247 [2024-10-14 17:47:27.189947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166f2d80 00:30:28.247 [2024-10-14 17:47:27.190698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.247 [2024-10-14 17:47:27.190718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:28.247 [2024-10-14 17:47:27.199530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e49b0 00:30:28.247 [2024-10-14 17:47:27.200446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.247 [2024-10-14 17:47:27.200465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:28.247 [2024-10-14 17:47:27.208868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166eff18 00:30:28.247 [2024-10-14 17:47:27.209301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.247 [2024-10-14 17:47:27.209321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:28.247 [2024-10-14 17:47:27.219884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e9e10 00:30:28.247 [2024-10-14 17:47:27.221276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.247 [2024-10-14 17:47:27.221294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0064 
p:0 m:0 dnr:0 00:30:28.247 [2024-10-14 17:47:27.229551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166fa3a0 00:30:28.247 [2024-10-14 17:47:27.231059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.248 [2024-10-14 17:47:27.231078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:28.248 [2024-10-14 17:47:27.236072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e4de8 00:30:28.248 [2024-10-14 17:47:27.236734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.248 [2024-10-14 17:47:27.236753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:28.248 [2024-10-14 17:47:27.245671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166f35f0 00:30:28.248 [2024-10-14 17:47:27.246457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.248 [2024-10-14 17:47:27.246476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:28.248 [2024-10-14 17:47:27.255039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166fbcf0 00:30:28.248 [2024-10-14 17:47:27.255569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.248 [2024-10-14 17:47:27.255587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:28.248 [2024-10-14 17:47:27.265450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166f5378 00:30:28.248 [2024-10-14 17:47:27.266843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.248 [2024-10-14 17:47:27.266860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:28.248 [2024-10-14 17:47:27.272002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e9e10 00:30:28.248 [2024-10-14 17:47:27.272649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.248 [2024-10-14 17:47:27.272668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:28.248 [2024-10-14 17:47:27.283346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166f96f8 00:30:28.248 [2024-10-14 17:47:27.284500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.248 [2024-10-14 17:47:27.284519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:90 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:28.248 [2024-10-14 17:47:27.292867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166f4298 00:30:28.248 [2024-10-14 17:47:27.294180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.248 [2024-10-14 17:47:27.294198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:28.248 [2024-10-14 17:47:27.301177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166f9f68 00:30:28.248 [2024-10-14 17:47:27.302433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.248 [2024-10-14 17:47:27.302452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:28.248 [2024-10-14 17:47:27.309561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166ec840 00:30:28.248 [2024-10-14 17:47:27.310146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.248 [2024-10-14 17:47:27.310167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:28.248 [2024-10-14 17:47:27.317955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166df988 00:30:28.248 [2024-10-14 17:47:27.318671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.248 [2024-10-14 17:47:27.318689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:28.248 [2024-10-14 17:47:27.329057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e95a0 00:30:28.248 [2024-10-14 17:47:27.330126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.248 [2024-10-14 17:47:27.330145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:28.248 [2024-10-14 17:47:27.338219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e99d8 00:30:28.248 [2024-10-14 17:47:27.339385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.248 [2024-10-14 17:47:27.339404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:28.248 [2024-10-14 17:47:27.347575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e01f8 00:30:28.248 [2024-10-14 17:47:27.348736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.248 [2024-10-14 17:47:27.348755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:28.248 [2024-10-14 17:47:27.355925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e95a0 00:30:28.248 [2024-10-14 17:47:27.356811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.248 [2024-10-14 17:47:27.356829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:28.248 [2024-10-14 17:47:27.365298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166ed0b0 00:30:28.248 [2024-10-14 17:47:27.366228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.248 [2024-10-14 17:47:27.366246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:28.248 [2024-10-14 17:47:27.376800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e9168 00:30:28.248 [2024-10-14 17:47:27.378203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.248 [2024-10-14 17:47:27.378221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:28.248 [2024-10-14 17:47:27.386376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166eaef0 00:30:28.508 [2024-10-14 17:47:27.387935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.387953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.392881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166f7970 00:30:28.508 [2024-10-14 17:47:27.393605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.393623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.401454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e5ec8 00:30:28.508 [2024-10-14 17:47:27.402160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.402178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.412395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166ebfd0 00:30:28.508 [2024-10-14 17:47:27.413370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.413388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.421560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166ebb98 00:30:28.508 [2024-10-14 17:47:27.422762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.422780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.431089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e1710 00:30:28.508 [2024-10-14 17:47:27.432433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.432451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.440702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166df988 00:30:28.508 [2024-10-14 17:47:27.442144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.442163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.449953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166f5be8 00:30:28.508 [2024-10-14 17:47:27.451360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.451378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.457819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e6300 00:30:28.508 [2024-10-14 17:47:27.458684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.458701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.466712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166ebb98 00:30:28.508 [2024-10-14 17:47:27.467571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.467589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.475999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e5a90 00:30:28.508 [2024-10-14 17:47:27.477113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.477132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.486257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166eaef0 00:30:28.508 [2024-10-14 17:47:27.487814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.487831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.492706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e4de8 00:30:28.508 [2024-10-14 17:47:27.493434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.493452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.502954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e9e10 00:30:28.508 [2024-10-14 17:47:27.504186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.504204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.511380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166ff3c8 00:30:28.508 [2024-10-14 17:47:27.512142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.512160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.520092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166dfdc0 00:30:28.508 [2024-10-14 17:47:27.520916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.520934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.529231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166f81e0 00:30:28.508 [2024-10-14 17:47:27.530035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.530053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.539772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166fa3a0 00:30:28.508 [2024-10-14 17:47:27.540994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 
17:47:27.541013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.547163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166edd58 00:30:28.508 [2024-10-14 17:47:27.547777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.547799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.555574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166f81e0 00:30:28.508 [2024-10-14 17:47:27.556280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.556298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.565029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166fef90 00:30:28.508 [2024-10-14 17:47:27.565853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.565871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:28.508 [2024-10-14 17:47:27.575954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166f8e88 00:30:28.508 [2024-10-14 17:47:27.577171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.508 [2024-10-14 17:47:27.577189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:28.509 [2024-10-14 17:47:27.582699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e7c50 00:30:28.509 [2024-10-14 17:47:27.583392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.509 [2024-10-14 17:47:27.583410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:28.509 [2024-10-14 17:47:27.593926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e27f0 00:30:28.509 [2024-10-14 17:47:27.595107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.509 [2024-10-14 17:47:27.595125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:28.509 [2024-10-14 17:47:27.603118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e0630 00:30:28.509 [2024-10-14 17:47:27.603875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:28.509 [2024-10-14 17:47:27.603893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:28.509 [2024-10-14 17:47:27.611916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166eaab8 00:30:28.509 [2024-10-14 17:47:27.613035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.509 [2024-10-14 17:47:27.613053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:28.509 [2024-10-14 17:47:27.621012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166f8a50 00:30:28.509 [2024-10-14 17:47:27.621966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.509 [2024-10-14 17:47:27.621984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:28.509 [2024-10-14 17:47:27.631211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e3d08 00:30:28.509 [2024-10-14 17:47:27.632681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.509 [2024-10-14 17:47:27.632699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:28.509 [2024-10-14 17:47:27.640707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166ec408 00:30:28.509 [2024-10-14 17:47:27.642262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.509 [2024-10-14 17:47:27.642279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:28.509 [2024-10-14 17:47:27.647223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166fef90 00:30:28.769 [2024-10-14 17:47:27.647966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.769 [2024-10-14 17:47:27.647984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:28.769 [2024-10-14 17:47:27.657680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166ef270 00:30:28.769 [2024-10-14 17:47:27.658857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.769 [2024-10-14 17:47:27.658875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:28.769 [2024-10-14 17:47:27.667133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166f5378 00:30:28.769 [2024-10-14 17:47:27.668464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1287 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:28.769 [2024-10-14 17:47:27.668483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:28.769 [2024-10-14 17:47:27.676810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e4de8 00:30:28.769 [2024-10-14 17:47:27.678264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.769 [2024-10-14 17:47:27.678283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:28.769 [2024-10-14 17:47:27.683441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e6b70 00:30:28.769 [2024-10-14 17:47:27.684148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.769 [2024-10-14 17:47:27.684166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:28.769 [2024-10-14 17:47:27.693019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166ef270 00:30:28.769 [2024-10-14 17:47:27.693856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.769 [2024-10-14 17:47:27.693874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:28.769 [2024-10-14 17:47:27.702563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166de8a8 00:30:28.769 [2024-10-14 17:47:27.703548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.769 [2024-10-14 17:47:27.703567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:28.769 [2024-10-14 17:47:27.713832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166f3e60 00:30:28.769 [2024-10-14 17:47:27.715304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.769 [2024-10-14 17:47:27.715322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:28.769 [2024-10-14 17:47:27.720455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166edd58 00:30:28.769 [2024-10-14 17:47:27.721203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.769 [2024-10-14 17:47:27.721221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:28.769 [2024-10-14 17:47:27.729900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166df118 00:30:28.769 [2024-10-14 17:47:27.730737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3350 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.769 [2024-10-14 17:47:27.730755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:28.769 [2024-10-14 17:47:27.739323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166de038 00:30:28.769 [2024-10-14 17:47:27.740322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.769 [2024-10-14 17:47:27.740341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:28.769 [2024-10-14 17:47:27.748510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e88f8 00:30:28.769 [2024-10-14 17:47:27.749108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.769 [2024-10-14 17:47:27.749127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.769 [2024-10-14 17:47:27.757761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166ef6a8 00:30:28.769 [2024-10-14 17:47:27.758547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.769 [2024-10-14 17:47:27.758566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.769 [2024-10-14 17:47:27.767282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166f6cc8 00:30:28.770 [2024-10-14 17:47:27.768328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.770 [2024-10-14 17:47:27.768347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.770 [2024-10-14 17:47:27.776574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166fb048 00:30:28.770 [2024-10-14 17:47:27.777371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.770 [2024-10-14 17:47:27.777389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.770 [2024-10-14 17:47:27.784850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166f46d0 00:30:28.770 [2024-10-14 17:47:27.785647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.770 [2024-10-14 17:47:27.785668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.770 [2024-10-14 17:47:27.794344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166fb480 00:30:28.770 [2024-10-14 17:47:27.795227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 
nsid:1 lba:207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.770 [2024-10-14 17:47:27.795245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:28.770 [2024-10-14 17:47:27.805246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166de038 00:30:28.770 [2024-10-14 17:47:27.806762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.770 [2024-10-14 17:47:27.806780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:28.770 [2024-10-14 17:47:27.811905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166fe2e8 00:30:28.770 [2024-10-14 17:47:27.812678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.770 [2024-10-14 17:47:27.812697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:28.770 [2024-10-14 17:47:27.823139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166eee38 00:30:28.770 [2024-10-14 17:47:27.824375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.770 [2024-10-14 17:47:27.824394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:28.770 [2024-10-14 17:47:27.832262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e38d0 00:30:28.770 [2024-10-14 17:47:27.833448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.770 [2024-10-14 17:47:27.833467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:28.770 [2024-10-14 17:47:27.840950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166f1430 00:30:28.770 [2024-10-14 17:47:27.842053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.770 [2024-10-14 17:47:27.842072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:28.770 [2024-10-14 17:47:27.849986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166ee5c8 00:30:28.770 [2024-10-14 17:47:27.850931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.770 [2024-10-14 17:47:27.850950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:28.770 [2024-10-14 17:47:27.858397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166f0bc0 00:30:28.770 [2024-10-14 17:47:27.859340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:90 nsid:1 lba:25506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.770 [2024-10-14 17:47:27.859364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:28.770 [2024-10-14 17:47:27.867870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166ec840 00:30:28.770 [2024-10-14 17:47:27.868948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.770 [2024-10-14 17:47:27.868966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:28.770 [2024-10-14 17:47:27.877300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166eee38 00:30:28.770 [2024-10-14 17:47:27.878612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.770 [2024-10-14 17:47:27.878630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:28.770 [2024-10-14 17:47:27.885792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e2c28 00:30:28.770 [2024-10-14 17:47:27.886964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.770 [2024-10-14 17:47:27.886982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:28.770 [2024-10-14 17:47:27.895072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e8d30 00:30:28.770 [2024-10-14 17:47:27.896044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.770 [2024-10-14 17:47:27.896062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:28.770 [2024-10-14 17:47:27.904525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e3d08 00:30:28.770 [2024-10-14 17:47:27.905713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.770 [2024-10-14 17:47:27.905732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:29.029 [2024-10-14 17:47:27.913905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166fda78 00:30:29.029 [2024-10-14 17:47:27.914618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.029 [2024-10-14 17:47:27.914637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:29.029 [2024-10-14 17:47:27.922526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166edd58 00:30:29.029 [2024-10-14 17:47:27.923902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.029 [2024-10-14 17:47:27.923921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:29.029 [2024-10-14 17:47:27.932196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e2c28 00:30:29.029 [2024-10-14 17:47:27.932991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.029 [2024-10-14 17:47:27.933010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:29.029 [2024-10-14 17:47:27.941614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166f6cc8 00:30:29.029 [2024-10-14 17:47:27.942576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.029 [2024-10-14 17:47:27.942595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:29.029 [2024-10-14 17:47:27.949972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166ebb98 00:30:29.029 [2024-10-14 17:47:27.951014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.029 [2024-10-14 17:47:27.951033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:29.029 [2024-10-14 17:47:27.961199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166fa7d8 00:30:29.029 [2024-10-14 17:47:27.962709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.029 [2024-10-14 17:47:27.962727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:29.029 [2024-10-14 17:47:27.967892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166dfdc0 00:30:29.029 [2024-10-14 17:47:27.968693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.029 [2024-10-14 17:47:27.968711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:29.029 [2024-10-14 17:47:27.979084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e2c28 00:30:29.029 [2024-10-14 17:47:27.980352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.029 [2024-10-14 17:47:27.980371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:29.029 [2024-10-14 17:47:27.988545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e9e10 00:30:29.029 [2024-10-14 
17:47:27.989899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.029 [2024-10-14 17:47:27.989918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:29.029 [2024-10-14 17:47:27.995032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166fd640 00:30:29.029 [2024-10-14 17:47:27.995652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.029 [2024-10-14 17:47:27.995671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:29.029 [2024-10-14 17:47:28.004522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e27f0 00:30:29.029 [2024-10-14 17:47:28.005379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.029 [2024-10-14 17:47:28.005397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:29.029 [2024-10-14 17:47:28.013991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166fda78 00:30:29.029 [2024-10-14 17:47:28.014921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.030 [2024-10-14 17:47:28.014940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:29.030 [2024-10-14 17:47:28.022813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e3498 00:30:29.030 [2024-10-14 17:47:28.023555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.030 [2024-10-14 17:47:28.023573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:29.030 [2024-10-14 17:47:28.031897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166ecc78 00:30:29.030 [2024-10-14 17:47:28.032649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.030 [2024-10-14 17:47:28.032668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:29.030 [2024-10-14 17:47:28.041273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166de038 00:30:29.030 [2024-10-14 17:47:28.042123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.030 [2024-10-14 17:47:28.042141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:29.030 [2024-10-14 17:47:28.049684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e84c0 
00:30:29.030 [2024-10-14 17:47:28.050441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:29.030 [2024-10-14 17:47:28.050459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:30:29.030 [2024-10-14 17:47:28.058877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc85c0) with pdu=0x2000166e6738
[... roughly 125 further injected-digest-error cycles elided (17:47:28.058 through 17:47:29.151): each cycle is one tcp.c:2233:data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0x1bc85c0)" with a varying pdu, the WRITE command it hit (varying cid/lba), and that command's COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, with one interleaved throughput sample: 27766.00 IOPS, 108.46 MiB/s [2024-10-14T15:47:28.168Z] ...]
00:30:30.072 27949.50 IOPS, 109.18 MiB/s
00:30:30.072 Latency(us)
00:30:30.072 [2024-10-14T15:47:29.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:30.072 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:30.072 nvme0n1 : 2.00 27957.86 109.21 0.00 0.00 4574.54 1802.24 12420.63
00:30:30.072 [2024-10-14T15:47:29.210Z] ===================================================================================================================
00:30:30.072 [2024-10-14T15:47:29.210Z] Total : 27957.86 109.21 0.00 0.00 4574.54 1802.24 12420.63
00:30:30.072 {
00:30:30.072   "results": [
00:30:30.072     {
00:30:30.072       "job": "nvme0n1",
00:30:30.072       "core_mask": "0x2",
00:30:30.072       "workload": "randwrite",
00:30:30.072       "status": "finished",
00:30:30.072       "queue_depth": 128,
00:30:30.072       "io_size": 4096,
00:30:30.072       "runtime": 2.00398,
00:30:30.072       "iops": 27957.863850936636,
00:30:30.072       "mibps": 109.21040566772123,
00:30:30.072       "io_failed": 0,
00:30:30.072       "io_timeout": 0,
00:30:30.072       "avg_latency_us": 4574.544443826828,
00:30:30.072       "min_latency_us": 1802.24,
00:30:30.072       "max_latency_us": 12420.63238095238
00:30:30.072     }
00:30:30.072   ],
00:30:30.072   "core_count": 1
00:30:30.072 }
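The mibps figure above follows directly from iops and this job's 4096-byte IO size; a quick off-log check with plain shell arithmetic (only bc is assumed):

  # 27957.86 IOPS x 4096 B per IO, converted to MiB/s (1 MiB = 1048576 B)
  echo '27957.863850936636 * 4096 / 1048576' | bc -l
  # prints ~109.2104, matching the reported "mibps": 109.21040566772123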
00:30:30.072 17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:30.072 17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:30.072 17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:30.072 | .driver_specific
00:30:30.072 | .nvme_error
00:30:30.072 | .status_code
00:30:30.072 | .command_transient_transport_error'
00:30:30.072 17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:30.331 17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 219 > 0 ))
00:30:30.331 17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1255975
00:30:30.331 17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1255975 ']'
00:30:30.331 17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1255975
00:30:30.331 17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:30:30.331 17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:30.331 17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1255975
00:30:30.331 17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:30.331 17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:30.331 17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1255975'
00:30:30.331 killing process with pid 1255975
00:30:30.331 17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1255975
00:30:30.331 Received shutdown signal, test time was about 2.000000 seconds
00:30:30.331
00:30:30.331 Latency(us)
00:30:30.331 [2024-10-14T15:47:29.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:30.331 [2024-10-14T15:47:29.469Z] ===================================================================================================================
00:30:30.331 [2024-10-14T15:47:29.469Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:30.331 17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1255975
17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1256666
17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1256666 /var/tmp/bperf.sock
17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1256666 ']'
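[editor's note] Condensed into plain shell, the check-and-relaunch cycle traced above amounts to the following sketch (paths and flags are the ones in this workspace; the variable handling is simplified and not the literal digest.sh code):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # ask the bdevperf app how many transient transport errors nvme0n1 has seen
    errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
           jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 ))                        # 219 here, so every injected digest error was counted
    kill "$bperfpid" && wait "$bperfpid"  # retire the qd=128 bdevperf instance
    # relaunch for the next shape: 128 KiB random writes at queue depth 16
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!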
17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-10-14 17:47:29.617183] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization...
[2024-10-14 17:47:29.617231] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256666 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
[2024-10-14 17:47:29.686676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-14 17:47:29.728608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
17:47:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:31.109 17:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:31.109 17:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:31.109 17:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:31.109 17:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:31.109 17:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:31.109 17:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:31.368 nvme0n1
00:30:31.368 17:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:31.368 17:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
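[editor's note] Before the run, the trace above wires up the failure mode with four RPCs. A re-sketch follows; the split between the target's default RPC socket and the bperf socket is inferred from the bperf_rpc/rpc_cmd helpers, and the reading of -i 32 as "inject on every 32nd operation" is an assumption:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # bdevperf side: keep per-bdev NVMe error stats, retry failed I/O forever
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side: make sure CRC32C injection starts disabled
    "$rpc" accel_error_inject_error -o crc32c -t disable
    # attach over TCP with data digest enabled, creating bdev nvme0n1
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side: now corrupt the computed CRC32C (assumed: every 32nd op)
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32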
00:30:31.368 17:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:31.368 17:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:31.368 17:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:31.368 17:47:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:31.628 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:31.628 Zero copy mechanism will not be used.
00:30:31.628 Running I/O for 2 seconds...
00:30:31.628 [2024-10-14 17:47:30.590019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90
00:30:31.628 [2024-10-14 17:47:30.590271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:31.628 [2024-10-14 17:47:30.590298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
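[editor's note] Each injected corruption shows up as the same three-line triple: the TCP layer reports the CRC32C mismatch, the qpair layer prints the in-flight WRITE, and the completion surfaces as TRANSIENT TRANSPORT ERROR with (sct/sc) = (00/22). The sqhd field advancing by 0x20 (32) between consecutive triples is consistent with the -i 32 injection interval set above. To pull the affected LBAs out of a saved copy of this output (the bperf.log file name is hypothetical):

    # print the lba of every corrupted WRITE recorded in the log
    sed -n 's/.*WRITE sqid:1 cid:15 nsid:1 lba:\([0-9]*\) .*/\1/p' bperf.log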
00:30:31.628 [... record triples from 17:47:30.595560 through 17:47:31.160340 omitted: roughly seventy repeats of the same three lines, a data_crc32_calc_done "Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90", a WRITE print on sqid:1 cid:15 nsid:1 (len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1; only the timestamp, the lba, and the sqhd value (cycling 0021/0041/0061/0001) change from one triple to the next ...]
00:30:32.154 [2024-10-14 17:47:31.165103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90
00:30:32.154 [2024-10-14 17:47:31.165306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:32.154 [2024-10-14 17:47:31.165326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:32.154 [2024-10-14 17:47:31.169663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.154 [2024-10-14 17:47:31.169868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.154 [2024-10-14 17:47:31.169888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.154 [2024-10-14 17:47:31.174313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.154 [2024-10-14 17:47:31.174512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.174531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.178678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.178882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.178901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.183542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.183770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.183790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.188220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.188442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.188461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.192911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.193114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.193133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.198044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.198265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.198285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.202754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.202975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.202994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.207700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.207922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.207942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.212183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.212405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.212429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.216677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.216887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.216908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.221004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.221224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.221245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.225444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.225671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.225691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.229901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.230104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.230124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.234340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.234543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.234562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.238706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.238908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.238927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.243159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.243360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.243378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.247486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.247711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.247730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.251710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.251948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.251968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.255910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.256134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.256154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.260098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.260317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.260336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.264273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.264494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.155 [2024-10-14 17:47:31.264514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.155 [2024-10-14 17:47:31.268435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.155 [2024-10-14 17:47:31.268659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.156 [2024-10-14 17:47:31.268678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.156 [2024-10-14 17:47:31.272619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.156 [2024-10-14 17:47:31.272841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.156 [2024-10-14 17:47:31.272870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.156 [2024-10-14 17:47:31.276948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.156 [2024-10-14 17:47:31.277152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.156 [2024-10-14 17:47:31.277171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.156 [2024-10-14 17:47:31.281711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.156 [2024-10-14 17:47:31.281916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.156 [2024-10-14 17:47:31.281935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.156 [2024-10-14 17:47:31.286410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.156 [2024-10-14 17:47:31.286619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.156 [2024-10-14 17:47:31.286637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.156 [2024-10-14 17:47:31.291635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.156 [2024-10-14 17:47:31.291845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.156 
[2024-10-14 17:47:31.291865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.416 [2024-10-14 17:47:31.296433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.296661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.296680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.301506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.301718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.301738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.306164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.306369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.306388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.310629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.310833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.310853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.315417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.315642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.315661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.320189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.320398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.320417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.325546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.325754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.325774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.330151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.330354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.330377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.335332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.335537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.335556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.341175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.341389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.341408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.346082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.346288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.346306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.350613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.350834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.350852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.355319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.355527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.355547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.359536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.359750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.359770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.363772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.363980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.363999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.368191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.368397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.368417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.373331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.373557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.373576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.379388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.379685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.379705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.385946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.386174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.386194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.392650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.392903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.392922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.399341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.399682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.399702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.406554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.406762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.406781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.411849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.412058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.412078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.417328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.417553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.417572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.422054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.422258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.422277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.427062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.427266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.427285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.431489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.431715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.431735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.436373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 
[2024-10-14 17:47:31.436596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-10-14 17:47:31.436623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.417 [2024-10-14 17:47:31.441194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.417 [2024-10-14 17:47:31.441397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.441416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.445946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.446149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.446168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.450561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.450769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.450792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.455452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.455678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.455698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.460412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.460645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.460665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.465551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.465761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.465784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.470483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.470694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.470713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.475317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.475520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.475539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.480761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.480992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.481011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.485433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.485643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.485661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.490441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.490650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.490668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.495559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.495785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.495805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.500179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.500399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.500419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.505078] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.505280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.505299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.510685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.510894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.510913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.515780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.516038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.516057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.521827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.522031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.522050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.526516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.526722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.526740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.531230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.531435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.531454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.536901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.537123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.537142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:30:32.418 [2024-10-14 17:47:31.542469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.542692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.542712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.548006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.548225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.548244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.418 [2024-10-14 17:47:31.552409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.418 [2024-10-14 17:47:31.552622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.418 [2024-10-14 17:47:31.552646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.713 [2024-10-14 17:47:31.556789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.713 [2024-10-14 17:47:31.556994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.713 [2024-10-14 17:47:31.557014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.713 [2024-10-14 17:47:31.561038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.713 [2024-10-14 17:47:31.561245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.713 [2024-10-14 17:47:31.561264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.713 [2024-10-14 17:47:31.565203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.713 [2024-10-14 17:47:31.565409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.713 [2024-10-14 17:47:31.565428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.713 [2024-10-14 17:47:31.569161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.713 [2024-10-14 17:47:31.569365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.713 [2024-10-14 17:47:31.569385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
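An aside for readers of this log: the repeated tcp.c data_crc32_calc_done errors above mean that the CRC32C data digest (DDGST) recomputed over a received NVMe/TCP data PDU did not match the digest carried in the PDU, so each affected WRITE completes with a transient transport error. Below is a minimal illustrative sketch of the digest being checked -- a bitwise CRC-32C (Castagnoli) reference, not SPDK's implementation, which uses table- or instruction-accelerated CRC32C; the "123456789" check string and its value 0xE3069283 are a conventional self-test, not taken from this log.

    /* Minimal sketch: NVMe/TCP's data digest (DDGST) is a CRC-32C
     * (Castagnoli) computed over the DATA field of a data PDU; a
     * mismatch with the received DDGST is what data_crc32_calc_done
     * reports above. Bitwise (slow) reference for illustration only. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <inttypes.h>

    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;          /* initial value */
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int k = 0; k < 8; k++)      /* reflected polynomial 0x82F63B78 */
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
        return crc ^ 0xFFFFFFFFu;            /* final XOR */
    }

    int main(void)
    {
        /* "123456789" is the conventional CRC self-test string;
         * CRC-32C of it is the well-known value 0xE3069283. */
        const uint8_t msg[] = "123456789";
        printf("crc32c = 0x%08" PRIX32 "\n", crc32c(msg, sizeof(msg) - 1));
        return 0;
    }

Compiled and run, this prints crc32c = 0xE3069283; any single flipped bit in the payload changes the result, which is why a corrupted data PDU surfaces here as a digest mismatch rather than silent data corruption.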
[... repeated data digest error entries continue, 2024-10-14 17:47:31.573057 through 17:47:31.581137 ...]
00:30:32.713 6264.00 IOPS, 783.00 MiB/s [2024-10-14T15:47:31.851Z]
[... the same pattern continues for further WRITE commands at varying LBAs, 2024-10-14 17:47:31.585852 through 17:47:31.713838 ...]
00:30:32.714 [2024-10-14 17:47:31.719642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90
00:30:32.714 [2024-10-14 17:47:31.719924] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.714 [2024-10-14 17:47:31.719944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.714 [2024-10-14 17:47:31.726567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.714 [2024-10-14 17:47:31.726803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.714 [2024-10-14 17:47:31.726823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.714 [2024-10-14 17:47:31.732351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.714 [2024-10-14 17:47:31.732573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.714 [2024-10-14 17:47:31.732593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.714 [2024-10-14 17:47:31.737167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.714 [2024-10-14 17:47:31.737393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.714 [2024-10-14 17:47:31.737413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.714 [2024-10-14 17:47:31.742126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.714 [2024-10-14 17:47:31.742329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.714 [2024-10-14 17:47:31.742348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.714 [2024-10-14 17:47:31.746775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.714 [2024-10-14 17:47:31.747017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.714 [2024-10-14 17:47:31.747036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.714 [2024-10-14 17:47:31.752312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.714 [2024-10-14 17:47:31.752514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.714 [2024-10-14 17:47:31.752533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.714 [2024-10-14 17:47:31.757127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.714 
[2024-10-14 17:47:31.757349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.714 [2024-10-14 17:47:31.757368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.714 [2024-10-14 17:47:31.761975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.714 [2024-10-14 17:47:31.762178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.714 [2024-10-14 17:47:31.762197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.714 [2024-10-14 17:47:31.767399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.714 [2024-10-14 17:47:31.767608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.714 [2024-10-14 17:47:31.767627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.714 [2024-10-14 17:47:31.772088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.714 [2024-10-14 17:47:31.772292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.714 [2024-10-14 17:47:31.772312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.714 [2024-10-14 17:47:31.776984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.714 [2024-10-14 17:47:31.777206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.714 [2024-10-14 17:47:31.777226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.715 [2024-10-14 17:47:31.781945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.715 [2024-10-14 17:47:31.782148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.715 [2024-10-14 17:47:31.782167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.715 [2024-10-14 17:47:31.786452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.715 [2024-10-14 17:47:31.786665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.715 [2024-10-14 17:47:31.786690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.715 [2024-10-14 17:47:31.791176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.715 [2024-10-14 17:47:31.791379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.715 [2024-10-14 17:47:31.791398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.715 [2024-10-14 17:47:31.796176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.715 [2024-10-14 17:47:31.796382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.715 [2024-10-14 17:47:31.796401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.715 [2024-10-14 17:47:31.800933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.715 [2024-10-14 17:47:31.801157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.715 [2024-10-14 17:47:31.801176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.715 [2024-10-14 17:47:31.805756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.715 [2024-10-14 17:47:31.805986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.715 [2024-10-14 17:47:31.806006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.715 [2024-10-14 17:47:31.810233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.715 [2024-10-14 17:47:31.810436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.715 [2024-10-14 17:47:31.810455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.715 [2024-10-14 17:47:31.814639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.715 [2024-10-14 17:47:31.814849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.715 [2024-10-14 17:47:31.814869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.715 [2024-10-14 17:47:31.819109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.715 [2024-10-14 17:47:31.819311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.715 [2024-10-14 17:47:31.819330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.715 [2024-10-14 17:47:31.823897] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.715 [2024-10-14 17:47:31.824116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.715 [2024-10-14 17:47:31.824137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.715 [2024-10-14 17:47:31.828687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.715 [2024-10-14 17:47:31.828909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.715 [2024-10-14 17:47:31.828928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.715 [2024-10-14 17:47:31.833433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.715 [2024-10-14 17:47:31.833662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.715 [2024-10-14 17:47:31.833682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.715 [2024-10-14 17:47:31.839395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.715 [2024-10-14 17:47:31.839675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.715 [2024-10-14 17:47:31.839694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.715 [2024-10-14 17:47:31.846871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.715 [2024-10-14 17:47:31.847155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.715 [2024-10-14 17:47:31.847175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.975 [2024-10-14 17:47:31.853451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.975 [2024-10-14 17:47:31.853708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.975 [2024-10-14 17:47:31.853727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.975 [2024-10-14 17:47:31.859921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.975 [2024-10-14 17:47:31.860168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.975 [2024-10-14 17:47:31.860188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
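The records above repeat one pattern: tcp.c:data_crc32_calc_done reports a data digest mismatch on the TCP qpair, after which the WRITE that carried the payload completes with a transport error. In NVMe/TCP the DATA field of a PDU is protected by a CRC32C data digest (DDGST); this test injects digest corruption, so the recomputation on the receive path fails every time. The stand-alone C sketch below illustrates that check only; it is a minimal illustration under that assumption, not SPDK's implementation (the real check lives in tcp.c, as the records show), and the file/function names are the sketch's own.

/* crc32c_sketch.c - illustrative only, not SPDK code.
 * A receiver recomputes the CRC32C data digest over a PDU's DATA
 * field and, on mismatch, fails the command the way the
 * "Data digest error" records above show.
 * Bitwise (slow) reflected CRC32C, polynomial 0x82F63B78. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ ((crc & 1) ? 0x82F63B78u : 0u);
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Standard CRC32C check value: crc32c("123456789") == 0xe3069283 */
    const uint8_t vec[] = "123456789";
    printf("self-check: %08x\n", (unsigned)crc32c(vec, 9));

    /* Simulated PDU payload whose transmitted digest was corrupted,
     * which is the effect the test injects here. */
    uint8_t data[512];
    memset(data, 0xA5, sizeof(data));
    uint32_t sent_ddgst = crc32c(data, sizeof(data)) ^ 0x1u; /* one bit flipped */
    if (crc32c(data, sizeof(data)) != sent_ddgst)
        printf("Data digest error (DDGST mismatch)\n");
    return 0;
}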
00:30:32.975 [2024-10-14 17:47:31.867033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.975 [2024-10-14 17:47:31.867283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.975 [2024-10-14 17:47:31.867303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.975 [2024-10-14 17:47:31.873686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.975 [2024-10-14 17:47:31.873959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.975 [2024-10-14 17:47:31.873979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.975 [2024-10-14 17:47:31.880513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.975 [2024-10-14 17:47:31.880752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.975 [2024-10-14 17:47:31.880777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.975 [2024-10-14 17:47:31.887279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.975 [2024-10-14 17:47:31.887525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.975 [2024-10-14 17:47:31.887545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.975 [2024-10-14 17:47:31.894081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.975 [2024-10-14 17:47:31.894367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.975 [2024-10-14 17:47:31.894386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.975 [2024-10-14 17:47:31.900985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.975 [2024-10-14 17:47:31.901259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.975 [2024-10-14 17:47:31.901279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.975 [2024-10-14 17:47:31.908039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.975 [2024-10-14 17:47:31.908343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.975 [2024-10-14 17:47:31.908363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.975 [2024-10-14 17:47:31.914807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.975 [2024-10-14 17:47:31.915083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.975 [2024-10-14 17:47:31.915104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.975 [2024-10-14 17:47:31.921784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.975 [2024-10-14 17:47:31.922067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.975 [2024-10-14 17:47:31.922087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.975 [2024-10-14 17:47:31.929134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.975 [2024-10-14 17:47:31.929442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.975 [2024-10-14 17:47:31.929462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.975 [2024-10-14 17:47:31.935237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:31.935444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:31.935464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:31.940527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:31.940757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:31.940778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:31.945817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:31.946057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:31.946076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:31.951549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:31.951778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:31.951798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:31.957012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:31.957245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:31.957264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:31.961685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:31.961889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:31.961907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:31.965826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:31.966030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:31.966050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:31.969845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:31.970049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:31.970073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:31.973968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:31.974170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:31.974190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:31.977994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:31.978219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:31.978239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:31.982085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:31.982289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:31.982308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:31.986154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:31.986355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:31.986375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:31.990170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:31.990374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:31.990393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:31.994248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:31.994450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:31.994469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:31.998253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:31.998460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:31.998479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:32.002355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:32.002561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:32.002580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:32.006373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:32.006578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:32.006598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:32.010459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:32.010671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 
[2024-10-14 17:47:32.010689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:32.014494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:32.014722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:32.014746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:32.018549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:32.018759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:32.018779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:32.022644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:32.022850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:32.022870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:32.026644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:32.026847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:32.026867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:32.030713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:32.030918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:32.030939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:32.034725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:32.034930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:32.034950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:32.038749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:32.038973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:32.038993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:32.042848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:32.043053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:32.043072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:32.046895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:32.047100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:32.047121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:32.050973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:32.051186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:32.051205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.976 [2024-10-14 17:47:32.055007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.976 [2024-10-14 17:47:32.055213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.976 [2024-10-14 17:47:32.055233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.977 [2024-10-14 17:47:32.059088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.977 [2024-10-14 17:47:32.059294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.977 [2024-10-14 17:47:32.059313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.977 [2024-10-14 17:47:32.063148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.977 [2024-10-14 17:47:32.063358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.977 [2024-10-14 17:47:32.063378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.977 [2024-10-14 17:47:32.067277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.977 [2024-10-14 17:47:32.067487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.977 [2024-10-14 17:47:32.067507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.977 [2024-10-14 17:47:32.071282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.977 [2024-10-14 17:47:32.071485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.977 [2024-10-14 17:47:32.071504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.977 [2024-10-14 17:47:32.075356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.977 [2024-10-14 17:47:32.075559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.977 [2024-10-14 17:47:32.075578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.977 [2024-10-14 17:47:32.079395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.977 [2024-10-14 17:47:32.079606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.977 [2024-10-14 17:47:32.079624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.977 [2024-10-14 17:47:32.083444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.977 [2024-10-14 17:47:32.083672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.977 [2024-10-14 17:47:32.083704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.977 [2024-10-14 17:47:32.087834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.977 [2024-10-14 17:47:32.088054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.977 [2024-10-14 17:47:32.088074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.977 [2024-10-14 17:47:32.093528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.977 [2024-10-14 17:47:32.093813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.977 [2024-10-14 17:47:32.093834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.977 [2024-10-14 17:47:32.099641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.977 [2024-10-14 17:47:32.099909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.977 [2024-10-14 17:47:32.099929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.977 [2024-10-14 17:47:32.105849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.977 [2024-10-14 17:47:32.106171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.977 [2024-10-14 17:47:32.106191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.977 [2024-10-14 17:47:32.112217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:32.977 [2024-10-14 17:47:32.112530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.977 [2024-10-14 17:47:32.112550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.237 [2024-10-14 17:47:32.118369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.237 [2024-10-14 17:47:32.118631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.237 [2024-10-14 17:47:32.118651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.237 [2024-10-14 17:47:32.124305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.237 [2024-10-14 17:47:32.124599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.237 [2024-10-14 17:47:32.124629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.237 [2024-10-14 17:47:32.130284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.237 [2024-10-14 17:47:32.130569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.237 [2024-10-14 17:47:32.130589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.237 [2024-10-14 17:47:32.136469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.237 [2024-10-14 17:47:32.136769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.237 [2024-10-14 17:47:32.136792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.237 [2024-10-14 17:47:32.142467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.237 
[2024-10-14 17:47:32.142732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.237 [2024-10-14 17:47:32.142752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.237 [2024-10-14 17:47:32.148393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.237 [2024-10-14 17:47:32.148701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.237 [2024-10-14 17:47:32.148721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.237 [2024-10-14 17:47:32.154561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.237 [2024-10-14 17:47:32.154862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.237 [2024-10-14 17:47:32.154882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.237 [2024-10-14 17:47:32.160942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.237 [2024-10-14 17:47:32.161252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.237 [2024-10-14 17:47:32.161272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.237 [2024-10-14 17:47:32.166888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.237 [2024-10-14 17:47:32.167185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.237 [2024-10-14 17:47:32.167204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.237 [2024-10-14 17:47:32.173199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.173445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.173465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.179492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.179768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.179788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.184533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.184767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.184786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.188877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.189083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.189104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.193592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.193809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.193828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.198219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.198424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.198442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.203055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.203259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.203278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.207720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.207944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.207964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.212505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.212732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.212752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.217174] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.217407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.217428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.221860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.222088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.222108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.226956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.227161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.227185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.231676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.231881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.231901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.236170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.236390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.236411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.240860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.241064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.241085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.245291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.245519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.245539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
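Each injected digest error then surfaces as the completion printed by spdk_nvme_print_completion: status (00/22), i.e. status code type 0x0 with status code 0x22, COMMAND TRANSIENT TRANSPORT ERROR, with dnr:0 (so the command is retryable) and sqhd advancing (0001, 0021, 0041, 0061, ...) as the submission queue head moves. Below is a hedged sketch of how those fields unpack from completion dwords 2 and 3, assuming the NVMe base-spec completion-entry layout; the names are the sketch's own, not SPDK's.

/* cqe_status_sketch.c - illustrative decode of the NVMe completion
 * status field, matching the fields printed in the records above. */
#include <stdint.h>
#include <stdio.h>

static void print_status(uint32_t dw2, uint32_t dw3)
{
    unsigned sqhd   = dw2 & 0xFFFFu;        /* DW2[15:0]  SQ head pointer */
    unsigned cid    = dw3 & 0xFFFFu;        /* DW3[15:0]  command id      */
    unsigned status = dw3 >> 16;            /* DW3[31:16] status + phase  */

    unsigned p   = status & 0x1;            /* phase tag        */
    unsigned sc  = (status >> 1) & 0xFF;    /* status code      */
    unsigned sct = (status >> 9) & 0x7;     /* status code type */
    unsigned m   = (status >> 14) & 0x1;    /* more             */
    unsigned dnr = (status >> 15) & 0x1;    /* do not retry     */

    printf("(%02x/%02x) cid:%u sqhd:%04x p:%u m:%u dnr:%u\n",
           sct, sc, cid, sqhd, p, m, dnr);
}

int main(void)
{
    /* SCT 0x0, SC 0x22 (Command Transient Transport Error), dnr=0:
     * the retryable status every corrupted WRITE above completes with. */
    uint32_t status = (0x0u << 9) | (0x22u << 1);        /* p=0, m=0, dnr=0 */
    print_status(/*dw2=*/0x0021u, /*dw3=*/(status << 16) | 15u /* cid */);
    return 0;
}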
00:30:33.238 [2024-10-14 17:47:32.249754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.249960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.249980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.254138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.254342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.254361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.258517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.258728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.258754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.262813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.263018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.263037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.268429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.268769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.268789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.274491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.274708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.274727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.279289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.279500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.279520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.284017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.284221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.284240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.289279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.289502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.289522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.294320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.294523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.294543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.299230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.299434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.299453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.304159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.304363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.304383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.238 [2024-10-14 17:47:32.309087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.238 [2024-10-14 17:47:32.309294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.238 [2024-10-14 17:47:32.309314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.239 [2024-10-14 17:47:32.314011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.239 [2024-10-14 17:47:32.314235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.239 [2024-10-14 17:47:32.314255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.239 [2024-10-14 17:47:32.318644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.239 [2024-10-14 17:47:32.318875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.239 [2024-10-14 17:47:32.318894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.239 [2024-10-14 17:47:32.323186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.239 [2024-10-14 17:47:32.323389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.239 [2024-10-14 17:47:32.323409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.239 [2024-10-14 17:47:32.327860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.239 [2024-10-14 17:47:32.328065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.239 [2024-10-14 17:47:32.328085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.239 [2024-10-14 17:47:32.332660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.239 [2024-10-14 17:47:32.332864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.239 [2024-10-14 17:47:32.332884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.239 [2024-10-14 17:47:32.337099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.239 [2024-10-14 17:47:32.337305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.239 [2024-10-14 17:47:32.337324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.239 [2024-10-14 17:47:32.341421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.239 [2024-10-14 17:47:32.341646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.239 [2024-10-14 17:47:32.341666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.239 [2024-10-14 17:47:32.346236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.239 [2024-10-14 17:47:32.346458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.239 [2024-10-14 17:47:32.346478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.239 [2024-10-14 17:47:32.351989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.239 [2024-10-14 17:47:32.352191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.239 [2024-10-14 17:47:32.352214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.239 [2024-10-14 17:47:32.356529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.239 [2024-10-14 17:47:32.356755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.239 [2024-10-14 17:47:32.356775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.239 [2024-10-14 17:47:32.361118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.239 [2024-10-14 17:47:32.361329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.239 [2024-10-14 17:47:32.361350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.239 [2024-10-14 17:47:32.365480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.239 [2024-10-14 17:47:32.365695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.239 [2024-10-14 17:47:32.365720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.239 [2024-10-14 17:47:32.369821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.239 [2024-10-14 17:47:32.370028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.239 [2024-10-14 17:47:32.370047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.239 [2024-10-14 17:47:32.374241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.239 [2024-10-14 17:47:32.374451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.239 [2024-10-14 17:47:32.374471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.499 [2024-10-14 17:47:32.378675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.499 [2024-10-14 17:47:32.378887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.499 
[2024-10-14 17:47:32.378907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.499 [2024-10-14 17:47:32.383352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.499 [2024-10-14 17:47:32.383561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.499 [2024-10-14 17:47:32.383581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.499 [2024-10-14 17:47:32.387621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.499 [2024-10-14 17:47:32.387845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.499 [2024-10-14 17:47:32.387865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.499 [2024-10-14 17:47:32.392073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.499 [2024-10-14 17:47:32.392284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.499 [2024-10-14 17:47:32.392304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.499 [2024-10-14 17:47:32.396523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.499 [2024-10-14 17:47:32.396735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.499 [2024-10-14 17:47:32.396755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.499 [2024-10-14 17:47:32.401422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.499 [2024-10-14 17:47:32.401651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.499 [2024-10-14 17:47:32.401670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.499 [2024-10-14 17:47:32.406553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.499 [2024-10-14 17:47:32.406784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.499 [2024-10-14 17:47:32.406803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.499 [2024-10-14 17:47:32.410879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.499 [2024-10-14 17:47:32.411101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.499 [2024-10-14 17:47:32.411121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.499 [2024-10-14 17:47:32.415252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.499 [2024-10-14 17:47:32.415475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.499 [2024-10-14 17:47:32.415496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.499 [2024-10-14 17:47:32.420298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.499 [2024-10-14 17:47:32.420521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.499 [2024-10-14 17:47:32.420541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.499 [2024-10-14 17:47:32.426105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.499 [2024-10-14 17:47:32.426394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.426414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.432305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.432528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.432549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.437933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.438145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.438165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.443938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.444160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.444180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.450204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.450449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.450469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.457086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.457378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.457397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.464117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.464402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.464423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.471961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.472278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.472299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.479368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.479590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.479617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.486635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.486919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.486939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.493912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.494132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.494156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.500963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.501240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.501261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.507938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.508235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.508255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.515865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.516159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.516178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.522464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.522701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.522727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.528675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.528927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.528947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.534620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.534875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.534895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.541350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.541643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.541663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.547000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 
[2024-10-14 17:47:32.547203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.547223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.552442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.552737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.552757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.558375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.558619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.558639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.563219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.563422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.563442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.568132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.568337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.568356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.573121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.573327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.573347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.578116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc8900) with pdu=0x2000166fef90 00:30:33.500 [2024-10-14 17:47:32.578322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.500 [2024-10-14 17:47:32.578342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.500 [2024-10-14 17:47:32.583613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
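Context for the block above: these failures are deliberate. The nvmf_digest_error test drives bdevperf through a path where the CRC32C data digest of each NVMe/TCP data PDU is corrupted, so tcp.c flags every PDU and the initiator completes each WRITE with COMMAND TRANSIENT TRANSPORT ERROR (00/22), a retryable status rather than silent corruption handed to the application. Exercising the same digest machinery outside this harness can be as simple as the sketch below, assuming a reachable NVMe/TCP target and a recent nvme-cli; the address, port, and subsystem NQN match this job's configuration but are otherwise illustrative:

    # Enable header and data digests on connect; with --data-digest set,
    # a CRC32C trailer is carried on every data PDU and verified on receipt.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hdr-digest --data-digest

With digests enabled, the initiator (kernel or SPDK, as here) validates the 4-byte CRC32C trailer on every data PDU before completing the I/O.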
00:30:33.500 6200.50 IOPS, 775.06 MiB/s
00:30:33.500 Latency(us)
00:30:33.500 [2024-10-14T15:47:32.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:33.500 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:30:33.500 nvme0n1 : 2.00 6199.12 774.89 0.00 0.00 2577.09 1669.61 8613.30
00:30:33.500 [2024-10-14T15:47:32.638Z] ===================================================================================================================
00:30:33.500 [2024-10-14T15:47:32.638Z] Total : 6199.12 774.89 0.00 0.00 2577.09 1669.61 8613.30
00:30:33.500 {
00:30:33.500 "results": [
00:30:33.500 {
00:30:33.500 "job": "nvme0n1",
00:30:33.500 "core_mask": "0x2",
00:30:33.500 "workload": "randwrite",
00:30:33.500 "status": "finished",
00:30:33.500 "queue_depth": 16,
00:30:33.500 "io_size": 131072,
00:30:33.500 "runtime": 2.00351,
00:30:33.500 "iops": 6199.120543446252,
00:30:33.500 "mibps": 774.8900679307815,
00:30:33.500 "io_failed": 0,
00:30:33.501 "io_timeout": 0,
00:30:33.501 "avg_latency_us": 2577.0862970631088,
00:30:33.501 "min_latency_us": 1669.607619047619,
00:30:33.501 "max_latency_us": 8613.302857142857
00:30:33.501 }
00:30:33.501 ],
00:30:33.501 "core_count": 1
00:30:33.501 }
00:30:33.501 17:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:33.501 17:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:33.501 17:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:33.501 | .driver_specific
00:30:33.501 | .nvme_error
00:30:33.501 | .status_code
00:30:33.501 | .command_transient_transport_error'
00:30:33.501 17:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:33.760 17:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 400 > 0 ))
00:30:33.760 17:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1256666
00:30:33.760 17:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1256666 ']'
00:30:33.760 17:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1256666
00:30:33.760 17:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:30:33.760 17:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:33.760 17:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1256666
00:30:33.760 17:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:33.760 17:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
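The pass/fail gate is visible at host/digest.sh@71: get_transient_errcount pulls bdevperf's iostat over the RPC socket and the test asserts the transient error counter is positive, here (( 400 > 0 )). A one-liner equivalent of that query, assuming the same SPDK checkout and socket path (a sketch of what the helper does, not its verbatim source):

    # Pull the transient transport error counter out of bdev_get_iostat;
    # this is the single-line form of the multi-line jq filter in the trace.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

jq's pipe syntax makes this equivalent to the filter split across lines in the trace above.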
00:30:33.760 17:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1256666'
00:30:33.760 killing process with pid 1256666
00:30:33.760 17:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1256666
00:30:33.760 Received shutdown signal, test time was about 2.000000 seconds
00:30:33.760
00:30:33.760 Latency(us)
00:30:33.760 [2024-10-14T15:47:32.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:33.760 [2024-10-14T15:47:32.898Z] ===================================================================================================================
00:30:33.760 [2024-10-14T15:47:32.898Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:33.760 17:47:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1256666
00:30:34.019 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1254789
00:30:34.019 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1254789 ']'
00:30:34.019 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1254789
00:30:34.019 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:30:34.019 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:34.019 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1254789
00:30:34.019 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:30:34.019 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:30:34.019 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1254789'
00:30:34.019 killing process with pid 1254789
00:30:34.019 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1254789
00:30:34.019 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1254789
00:30:34.278
00:30:34.279
00:30:34.279 real 0m14.102s
00:30:34.279 user 0m27.066s
00:30:34.279 sys 0m4.459s
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:34.279 ************************************
00:30:34.279 END TEST nvmf_digest_error
00:30:34.279 ************************************
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:34.279 rmmod nvme_tcp
00:30:34.279 rmmod nvme_fabrics
00:30:34.279 rmmod nvme_keyring
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 1254789 ']'
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 1254789
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1254789 ']'
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1254789
00:30:34.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1254789) - No such process
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1254789 is not found'
00:30:34.279 Process with pid 1254789 is not found
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:34.279 17:47:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
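The nvmftestfini sequence above compresses to a small teardown pattern: retry unloading the kernel NVMe/TCP modules while connections drain, strip only the iptables rules the suite tagged SPDK_NVMF, and drop the target's network namespace. A condensed sketch (the retry count and the namespace name cvl_0_0_ns_spdk come from this run; the exact helper differs):

    set +e
    for i in {1..20}; do
        # nvme-fabrics can only unload once nvme-tcp is gone
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e
    # Keep all firewall state except rules tagged SPDK_NVMF
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Hypothetical equivalent of remove_spdk_ns for this run's namespace
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null

The "kill: (1254789) - No such process" above is benign: the nvmf target had already exited, and killprocess treats a missing pid as nothing left to do.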
00:30:36.815 17:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:36.815
00:30:36.815 real 0m36.206s
00:30:36.815 user 0m55.020s
00:30:36.815 sys 0m13.648s
00:30:36.815 17:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:36.815 17:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:30:36.815 ************************************
00:30:36.815 END TEST nvmf_digest
00:30:36.815 ************************************
00:30:36.815 17:47:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:30:36.815 17:47:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:30:36.815 17:47:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:30:36.815 17:47:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:30:36.815 17:47:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:30:36.815 17:47:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- #
xtrace_disable 00:30:36.815 17:47:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.815 ************************************ 00:30:36.815 START TEST nvmf_bdevperf 00:30:36.815 ************************************ 00:30:36.815 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:36.815 * Looking for test storage... 00:30:36.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:36.815 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:36.815 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:30:36.815 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:36.815 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:36.815 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.815 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.815 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.815 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.815 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.815 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:36.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.816 --rc genhtml_branch_coverage=1 00:30:36.816 --rc genhtml_function_coverage=1 00:30:36.816 --rc genhtml_legend=1 00:30:36.816 --rc geninfo_all_blocks=1 00:30:36.816 --rc geninfo_unexecuted_blocks=1 00:30:36.816 00:30:36.816 ' 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:36.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.816 --rc genhtml_branch_coverage=1 00:30:36.816 --rc genhtml_function_coverage=1 00:30:36.816 --rc genhtml_legend=1 00:30:36.816 --rc geninfo_all_blocks=1 00:30:36.816 --rc geninfo_unexecuted_blocks=1 00:30:36.816 00:30:36.816 ' 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:36.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.816 --rc genhtml_branch_coverage=1 00:30:36.816 --rc genhtml_function_coverage=1 00:30:36.816 --rc genhtml_legend=1 00:30:36.816 --rc geninfo_all_blocks=1 00:30:36.816 --rc geninfo_unexecuted_blocks=1 00:30:36.816 00:30:36.816 ' 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:36.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.816 --rc genhtml_branch_coverage=1 00:30:36.816 --rc genhtml_function_coverage=1 00:30:36.816 --rc genhtml_legend=1 00:30:36.816 --rc geninfo_all_blocks=1 00:30:36.816 --rc geninfo_unexecuted_blocks=1 00:30:36.816 00:30:36.816 ' 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:36.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:36.816 17:47:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:43.387 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:43.387 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
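[Editor's note] The trace above matched both ports of an Intel E810 NIC (device 0x159b, driver ice) and, in the records just below, resolves each PCI function to its kernel interface by globbing sysfs. A minimal standalone sketch of that lookup, assuming the two addresses found above:

    # Each PCI function exposes its network interfaces under
    # /sys/bus/pci/devices/<addr>/net/; the basename is the ifname.
    for pci in 0000:86:00.0 0000:86:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep ifnames
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done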
00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:43.387 Found net devices under 0000:86:00.0: cvl_0_0 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:43.387 Found net devices under 0000:86:00.1: cvl_0_1 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:43.387 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:43.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:43.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:30:43.388 00:30:43.388 --- 10.0.0.2 ping statistics --- 00:30:43.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.388 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:43.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:43.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:30:43.388 00:30:43.388 --- 10.0.0.1 ping statistics --- 00:30:43.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.388 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1260669 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1260669 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1260669 ']' 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.388 [2024-10-14 17:47:41.629041] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:30:43.388 [2024-10-14 17:47:41.629085] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:43.388 [2024-10-14 17:47:41.701996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:43.388 [2024-10-14 17:47:41.745207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:43.388 [2024-10-14 17:47:41.745239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:43.388 [2024-10-14 17:47:41.745246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:43.388 [2024-10-14 17:47:41.745253] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:43.388 [2024-10-14 17:47:41.745259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:43.388 [2024-10-14 17:47:41.746634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:43.388 [2024-10-14 17:47:41.746741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.388 [2024-10-14 17:47:41.746741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.388 [2024-10-14 17:47:41.890695] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.388 Malloc0 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
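[Editor's note] rpc_cmd above is the autotest wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock inside the target netns; the nvmf_subsystem_add_ns and nvmf_subsystem_add_listener calls that finish the subsystem follow just below. A minimal sketch of the same bring-up as plain rpc.py invocations, using the values traced here:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8 KiB IO unit size (-u), -o as traced
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420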
00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.388 [2024-10-14 17:47:41.957554] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:43.388 { 00:30:43.388 "params": { 00:30:43.388 "name": "Nvme$subsystem", 00:30:43.388 "trtype": "$TEST_TRANSPORT", 00:30:43.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:43.388 "adrfam": "ipv4", 00:30:43.388 "trsvcid": "$NVMF_PORT", 00:30:43.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:43.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:43.388 "hdgst": ${hdgst:-false}, 00:30:43.388 "ddgst": ${ddgst:-false} 00:30:43.388 }, 00:30:43.388 "method": "bdev_nvme_attach_controller" 00:30:43.388 } 00:30:43.388 EOF 00:30:43.388 )") 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:30:43.388 17:47:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:43.388 "params": { 00:30:43.388 "name": "Nvme1", 00:30:43.388 "trtype": "tcp", 00:30:43.388 "traddr": "10.0.0.2", 00:30:43.388 "adrfam": "ipv4", 00:30:43.388 "trsvcid": "4420", 00:30:43.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:43.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:43.388 "hdgst": false, 00:30:43.388 "ddgst": false 00:30:43.388 }, 00:30:43.388 "method": "bdev_nvme_attach_controller" 00:30:43.388 }' 00:30:43.388 [2024-10-14 17:47:42.009044] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:30:43.388 [2024-10-14 17:47:42.009087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260697 ] 00:30:43.388 [2024-10-14 17:47:42.078309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.388 [2024-10-14 17:47:42.119103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.388 Running I/O for 1 seconds... 00:30:44.765 11383.00 IOPS, 44.46 MiB/s 00:30:44.765 Latency(us) 00:30:44.765 [2024-10-14T15:47:43.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.765 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:44.765 Verification LBA range: start 0x0 length 0x4000 00:30:44.765 Nvme1n1 : 1.02 11463.21 44.78 0.00 0.00 11125.45 2340.57 15791.06 00:30:44.765 [2024-10-14T15:47:43.903Z] =================================================================================================================== 00:30:44.765 [2024-10-14T15:47:43.903Z] Total : 11463.21 44.78 0.00 0.00 11125.45 2340.57 15791.06 00:30:44.765 17:47:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1260950 00:30:44.765 17:47:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:44.765 17:47:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:44.765 17:47:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:44.765 17:47:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:30:44.765 17:47:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:30:44.765 17:47:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:44.765 17:47:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:44.765 { 00:30:44.765 "params": { 00:30:44.765 "name": "Nvme$subsystem", 00:30:44.765 "trtype": "$TEST_TRANSPORT", 00:30:44.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:44.765 "adrfam": "ipv4", 00:30:44.765 "trsvcid": "$NVMF_PORT", 00:30:44.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:44.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:44.765 "hdgst": ${hdgst:-false}, 00:30:44.765 "ddgst": ${ddgst:-false} 00:30:44.765 }, 00:30:44.765 "method": "bdev_nvme_attach_controller" 00:30:44.765 } 00:30:44.765 EOF 00:30:44.765 )") 00:30:44.765 17:47:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:30:44.765 17:47:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 
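[Editor's note] The 1-second sanity run above passed (11463.21 IOPS average, zero failures), and the trace now prepares the 15-second run. gen_nvmf_target_json (nvmf/common.sh@558-584) builds one bdev_nvme_attach_controller entry per subsystem from the heredoc above and pipes it through jq; bdevperf sees the result as --json /dev/fd/63 because bdevperf.sh hands it over with bash process substitution. A minimal sketch of that pattern, assuming the function is sourced from test/nvmf/common.sh:

    # <(...) appears to the child process as /dev/fd/<n>, so no temp file is needed
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f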
00:30:44.765 17:47:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:30:44.765 17:47:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:44.765 "params": { 00:30:44.765 "name": "Nvme1", 00:30:44.765 "trtype": "tcp", 00:30:44.765 "traddr": "10.0.0.2", 00:30:44.765 "adrfam": "ipv4", 00:30:44.765 "trsvcid": "4420", 00:30:44.765 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.765 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:44.765 "hdgst": false, 00:30:44.765 "ddgst": false 00:30:44.765 }, 00:30:44.765 "method": "bdev_nvme_attach_controller" 00:30:44.765 }' 00:30:44.765 [2024-10-14 17:47:43.699066] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:30:44.765 [2024-10-14 17:47:43.699115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260950 ] 00:30:44.765 [2024-10-14 17:47:43.767209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.765 [2024-10-14 17:47:43.805997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.025 Running I/O for 15 seconds... 00:30:46.896 11335.00 IOPS, 44.28 MiB/s [2024-10-14T15:47:46.979Z] 11394.00 IOPS, 44.51 MiB/s [2024-10-14T15:47:46.979Z] 17:47:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1260669 00:30:47.841 17:47:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:47.841 [2024-10-14 17:47:46.676522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:115328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.841 [2024-10-14 17:47:46.676562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.841 [2024-10-14 17:47:46.676578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:115336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.841 [2024-10-14 17:47:46.676589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.841 [2024-10-14 17:47:46.676604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:115344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.841 [2024-10-14 17:47:46.676614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.841 [2024-10-14 17:47:46.676624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:115352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.841 [2024-10-14 17:47:46.676632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.841 [2024-10-14 17:47:46.676643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:115360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.841 [2024-10-14 17:47:46.676650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:47.841 [2024-10-14 17:47:46.676665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:47.841 [2024-10-14 
17:47:46.676674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[Editor's note: 111 further NOTICE pairs elided. nvme_qpair.c: 243:nvme_io_qpair_print_command logs each remaining in-flight READ on sqid:1 (nsid:1, len:8, lba 115376 through 116256 in steps of 8) and nvme_qpair.c: 474:spdk_nvme_print_completion completes each with the identical ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the transcript breaks off mid-record at lba 116256.]
00:30:47.844 [2024-10-14 17:47:46.678501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:47.844 [2024-10-14 17:47:46.678509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:116264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:47.844 [2024-10-14 17:47:46.678515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:47.844 [2024-10-14 17:47:46.678525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:116272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:47.844 [2024-10-14 17:47:46.678531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:47.844 [2024-10-14 17:47:46.678539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:47.844 [2024-10-14 17:47:46.678546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:47.844 [2024-10-14 17:47:46.678554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:47.844 [2024-10-14 17:47:46.678560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:47.844 [2024-10-14 17:47:46.678568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:47.844 [2024-10-14 17:47:46.678574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:47.844 [2024-10-14 17:47:46.678582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:47.844 [2024-10-14 17:47:46.678588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:47.844 [2024-10-14 17:47:46.678596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:47.844 [2024-10-14 17:47:46.678608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:47.844 [2024-10-14 17:47:46.678617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:47.844 [2024-10-14 17:47:46.678624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:47.844 [2024-10-14 17:47:46.678632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:116328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:47.844 [2024-10-14 17:47:46.678639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:47.844 [2024-10-14 17:47:46.678646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:116336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:47.844 [2024-10-14 17:47:46.678653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:47.844 [2024-10-14 17:47:46.678660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26c7cc0 is same with the state(6) to be set
00:30:47.844 [2024-10-14 17:47:46.678668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:47.844 [2024-10-14 17:47:46.678673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:47.844 [2024-10-14 17:47:46.678679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116344 len:8 PRP1 0x0 PRP2 0x0
00:30:47.844 [2024-10-14 17:47:46.678687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:47.844 [2024-10-14 17:47:46.678730] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26c7cc0 was disconnected and freed. reset controller.
00:30:47.844 [2024-10-14 17:47:46.681527] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:47.845 [2024-10-14 17:47:46.681578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:47.845 [2024-10-14 17:47:46.682113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:47.845 [2024-10-14 17:47:46.682129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:47.845 [2024-10-14 17:47:46.682137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:47.845 [2024-10-14 17:47:46.682311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:47.845 [2024-10-14 17:47:46.682484] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:47.845 [2024-10-14 17:47:46.682491] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:47.845 [2024-10-14 17:47:46.682499] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:47.845 [2024-10-14 17:47:46.685261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
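Each aborted command above is printed as a pair: the queued READ itself, then its completion with status "(00/08)", which is SPDK's "(SCT/SC)" notation. Status Code Type 0x0 is Generic Command Status, and Status Code 0x08 under that type is Command Aborted due to SQ Deletion, matching the text SPDK prints. A minimal decoding sketch covering only the two values that occur in this log (this is not SPDK source; the full status tables live in the NVMe base specification):

/* Sketch: decode the "(00/08)" pair that spdk_nvme_print_completion
 * prints for each aborted READ above. Handles only the values seen here. */
#include <stdio.h>

static const char *decode(unsigned sct, unsigned sc)
{
    /* SCT 0x0 = Generic Command Status; under it, SC 0x08 is
     * "Command Aborted due to SQ Deletion" per the NVMe base spec. */
    if (sct == 0x0 && sc == 0x08)
        return "ABORTED - SQ DELETION";
    return "OTHER";
}

int main(void)
{
    unsigned sct = 0x00, sc = 0x08;   /* the "(00/08)" printed in the log */
    printf("(%02x/%02x) -> %s\n", sct, sc, decode(sct, sc));
    return 0;
}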
00:30:47.845 [2024-10-14 17:47:46.694753] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.845 [2024-10-14 17:47:46.695092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.845 [2024-10-14 17:47:46.695109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.845 [2024-10-14 17:47:46.695119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.845 [2024-10-14 17:47:46.695286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.845 [2024-10-14 17:47:46.695455] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.845 [2024-10-14 17:47:46.695464] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.845 [2024-10-14 17:47:46.695471] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.845 [2024-10-14 17:47:46.698193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.845 [2024-10-14 17:47:46.707714] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.845 [2024-10-14 17:47:46.708076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.845 [2024-10-14 17:47:46.708093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.845 [2024-10-14 17:47:46.708101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.845 [2024-10-14 17:47:46.708268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.845 [2024-10-14 17:47:46.708437] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.845 [2024-10-14 17:47:46.708445] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.845 [2024-10-14 17:47:46.708451] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.845 [2024-10-14 17:47:46.711106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
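Every reconnect attempt in this stretch dies the same way: posix_sock_create reports connect() failed, errno = 111, which is ECONNREFUSED. Nothing is accepting TCP connections on 10.0.0.2:4420, so each qpair reconnect fails before the NVMe/TCP handshake even starts, and spdk_nvme_ctrlr_reconnect_poll_async declares the reinitialization failed. A self-contained sketch that reproduces the same errno when no listener is up (the address and port are taken from the log lines; nothing here is SPDK-specific):

/* Sketch: connecting to 10.0.0.2:4420 with no NVMe-oF target listening
 * yields the same errno = 111 (ECONNREFUSED) seen in the log above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);               /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the target down this prints: connect: errno = 111 (...) */
        printf("connect: errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}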
00:30:47.845 [2024-10-14 17:47:46.720727] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.845 [2024-10-14 17:47:46.721131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.845 [2024-10-14 17:47:46.721148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.845 [2024-10-14 17:47:46.721156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.845 [2024-10-14 17:47:46.721328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.845 [2024-10-14 17:47:46.721505] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.845 [2024-10-14 17:47:46.721513] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.845 [2024-10-14 17:47:46.721520] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.845 [2024-10-14 17:47:46.724275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.845 [2024-10-14 17:47:46.733719] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.845 [2024-10-14 17:47:46.734139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.845 [2024-10-14 17:47:46.734156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.845 [2024-10-14 17:47:46.734165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.845 [2024-10-14 17:47:46.734349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.845 [2024-10-14 17:47:46.734533] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.845 [2024-10-14 17:47:46.734541] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.845 [2024-10-14 17:47:46.734548] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.845 [2024-10-14 17:47:46.737471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.845 [2024-10-14 17:47:46.746994] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.845 [2024-10-14 17:47:46.747431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.845 [2024-10-14 17:47:46.747448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.845 [2024-10-14 17:47:46.747456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.845 [2024-10-14 17:47:46.747664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.845 [2024-10-14 17:47:46.747860] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.845 [2024-10-14 17:47:46.747870] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.845 [2024-10-14 17:47:46.747878] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.845 [2024-10-14 17:47:46.750892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.845 [2024-10-14 17:47:46.760183] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.845 [2024-10-14 17:47:46.760640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.845 [2024-10-14 17:47:46.760657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.845 [2024-10-14 17:47:46.760665] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.845 [2024-10-14 17:47:46.760849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.845 [2024-10-14 17:47:46.761032] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.845 [2024-10-14 17:47:46.761041] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.845 [2024-10-14 17:47:46.761048] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.845 [2024-10-14 17:47:46.763984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.845 [2024-10-14 17:47:46.773253] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.845 [2024-10-14 17:47:46.773663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.845 [2024-10-14 17:47:46.773681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.845 [2024-10-14 17:47:46.773688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.845 [2024-10-14 17:47:46.773860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.845 [2024-10-14 17:47:46.774035] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.845 [2024-10-14 17:47:46.774043] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.845 [2024-10-14 17:47:46.774050] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.845 [2024-10-14 17:47:46.776966] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.845 [2024-10-14 17:47:46.786518] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.845 [2024-10-14 17:47:46.786894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.845 [2024-10-14 17:47:46.786911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.845 [2024-10-14 17:47:46.786919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.845 [2024-10-14 17:47:46.787091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.845 [2024-10-14 17:47:46.787283] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.845 [2024-10-14 17:47:46.787291] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.845 [2024-10-14 17:47:46.787298] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.845 [2024-10-14 17:47:46.790229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.845 [2024-10-14 17:47:46.799545] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.845 [2024-10-14 17:47:46.800002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.845 [2024-10-14 17:47:46.800019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.845 [2024-10-14 17:47:46.800027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.845 [2024-10-14 17:47:46.800222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.845 [2024-10-14 17:47:46.800394] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.845 [2024-10-14 17:47:46.800402] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.845 [2024-10-14 17:47:46.800408] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.845 [2024-10-14 17:47:46.803158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.845 [2024-10-14 17:47:46.812584] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.845 [2024-10-14 17:47:46.813022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.845 [2024-10-14 17:47:46.813038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.845 [2024-10-14 17:47:46.813049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.846 [2024-10-14 17:47:46.813222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.846 [2024-10-14 17:47:46.813411] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.846 [2024-10-14 17:47:46.813420] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.846 [2024-10-14 17:47:46.813426] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.846 [2024-10-14 17:47:46.816251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.846 [2024-10-14 17:47:46.825593] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.846 [2024-10-14 17:47:46.826042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.846 [2024-10-14 17:47:46.826058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.846 [2024-10-14 17:47:46.826065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.846 [2024-10-14 17:47:46.826238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.846 [2024-10-14 17:47:46.826409] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.846 [2024-10-14 17:47:46.826418] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.846 [2024-10-14 17:47:46.826424] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.846 [2024-10-14 17:47:46.829234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.846 [2024-10-14 17:47:46.838813] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.846 [2024-10-14 17:47:46.839252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.846 [2024-10-14 17:47:46.839269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.846 [2024-10-14 17:47:46.839276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.846 [2024-10-14 17:47:46.839458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.846 [2024-10-14 17:47:46.839649] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.846 [2024-10-14 17:47:46.839658] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.846 [2024-10-14 17:47:46.839665] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.846 [2024-10-14 17:47:46.842880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.846 [2024-10-14 17:47:46.852155] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.846 [2024-10-14 17:47:46.852576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.846 [2024-10-14 17:47:46.852596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.846 [2024-10-14 17:47:46.852611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.846 [2024-10-14 17:47:46.852794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.846 [2024-10-14 17:47:46.852993] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.846 [2024-10-14 17:47:46.853009] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.846 [2024-10-14 17:47:46.853016] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.846 [2024-10-14 17:47:46.855974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.846 [2024-10-14 17:47:46.865446] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.846 [2024-10-14 17:47:46.865876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.846 [2024-10-14 17:47:46.865893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.846 [2024-10-14 17:47:46.865901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.846 [2024-10-14 17:47:46.866085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.846 [2024-10-14 17:47:46.866269] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.846 [2024-10-14 17:47:46.866279] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.846 [2024-10-14 17:47:46.866285] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.846 [2024-10-14 17:47:46.869143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.846 [2024-10-14 17:47:46.878554] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.846 [2024-10-14 17:47:46.878984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.846 [2024-10-14 17:47:46.879001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.846 [2024-10-14 17:47:46.879008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.846 [2024-10-14 17:47:46.879180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.846 [2024-10-14 17:47:46.879352] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.846 [2024-10-14 17:47:46.879361] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.846 [2024-10-14 17:47:46.879367] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.846 [2024-10-14 17:47:46.882117] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.846 [2024-10-14 17:47:46.891502] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.846 [2024-10-14 17:47:46.891938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.846 [2024-10-14 17:47:46.891983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.846 [2024-10-14 17:47:46.892006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.846 [2024-10-14 17:47:46.892586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.846 [2024-10-14 17:47:46.893175] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.846 [2024-10-14 17:47:46.893183] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.846 [2024-10-14 17:47:46.893190] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.846 [2024-10-14 17:47:46.898809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.846 [2024-10-14 17:47:46.906729] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.846 [2024-10-14 17:47:46.907260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.846 [2024-10-14 17:47:46.907304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.846 [2024-10-14 17:47:46.907328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.846 [2024-10-14 17:47:46.907973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.846 [2024-10-14 17:47:46.908376] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.846 [2024-10-14 17:47:46.908387] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.846 [2024-10-14 17:47:46.908396] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.846 [2024-10-14 17:47:46.912454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.846 [2024-10-14 17:47:46.919731] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.846 [2024-10-14 17:47:46.920172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.846 [2024-10-14 17:47:46.920217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.846 [2024-10-14 17:47:46.920240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.846 [2024-10-14 17:47:46.920836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.846 [2024-10-14 17:47:46.921118] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.846 [2024-10-14 17:47:46.921126] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.846 [2024-10-14 17:47:46.921132] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.846 [2024-10-14 17:47:46.923800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.846 [2024-10-14 17:47:46.932555] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.846 [2024-10-14 17:47:46.932993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.846 [2024-10-14 17:47:46.933010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.846 [2024-10-14 17:47:46.933018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.847 [2024-10-14 17:47:46.933185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.847 [2024-10-14 17:47:46.933352] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.847 [2024-10-14 17:47:46.933361] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.847 [2024-10-14 17:47:46.933367] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.847 [2024-10-14 17:47:46.936125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.847 [2024-10-14 17:47:46.945511] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.847 [2024-10-14 17:47:46.945877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.847 [2024-10-14 17:47:46.945894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.847 [2024-10-14 17:47:46.945905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.847 [2024-10-14 17:47:46.946078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.847 [2024-10-14 17:47:46.946250] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.847 [2024-10-14 17:47:46.946259] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.847 [2024-10-14 17:47:46.946265] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.847 [2024-10-14 17:47:46.949013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.847 [2024-10-14 17:47:46.958567] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.847 [2024-10-14 17:47:46.959000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.847 [2024-10-14 17:47:46.959017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.847 [2024-10-14 17:47:46.959025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.847 [2024-10-14 17:47:46.959198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.847 [2024-10-14 17:47:46.959371] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.847 [2024-10-14 17:47:46.959379] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.847 [2024-10-14 17:47:46.959385] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.847 [2024-10-14 17:47:46.962130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.847 [2024-10-14 17:47:46.971599] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.847 [2024-10-14 17:47:46.972030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.847 [2024-10-14 17:47:46.972046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:47.847 [2024-10-14 17:47:46.972053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:47.847 [2024-10-14 17:47:46.972226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:47.847 [2024-10-14 17:47:46.972400] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.847 [2024-10-14 17:47:46.972408] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.847 [2024-10-14 17:47:46.972414] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.190 [2024-10-14 17:47:46.975173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.190 10266.67 IOPS, 40.10 MiB/s [2024-10-14T15:47:47.328Z] [2024-10-14 17:47:46.984617] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.190 [2024-10-14 17:47:46.985050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.190 [2024-10-14 17:47:46.985073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.190 [2024-10-14 17:47:46.985082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.190 [2024-10-14 17:47:46.985266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.190 [2024-10-14 17:47:46.985451] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.190 [2024-10-14 17:47:46.985468] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.190 [2024-10-14 17:47:46.985476] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.190 [2024-10-14 17:47:46.988354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.190 [2024-10-14 17:47:46.997836] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.190 [2024-10-14 17:47:46.998274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.190 [2024-10-14 17:47:46.998292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.190 [2024-10-14 17:47:46.998299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.190 [2024-10-14 17:47:46.998473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.191 [2024-10-14 17:47:46.998651] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.191 [2024-10-14 17:47:46.998661] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.191 [2024-10-14 17:47:46.998667] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.191 [2024-10-14 17:47:47.001402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
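The "10266.67 IOPS, 40.10 MiB/s" line interleaved above is the benchmark tool's periodic throughput progress output, and the two figures are mutually consistent with the I/O size visible in the trace: every READ here is len:8, i.e. eight 512-byte blocks, or 4 KiB per command. A quick check of that arithmetic (the 4 KiB I/O size is inferred from the len:8 commands in this log, not stated by the tool):

/* Sketch: verify the progress line is self-consistent, assuming the
 * 4 KiB I/O size implied by the "len:8" 512-byte-block READs above. */
#include <stdio.h>

int main(void)
{
    double iops = 10266.67;
    double io_bytes = 8 * 512;                      /* len:8 blocks of 512 B */
    printf("%.2f MiB/s\n", iops * io_bytes / (1024 * 1024));   /* ~40.10 */
    return 0;
}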
00:30:48.191 [2024-10-14 17:47:47.010686] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.191 [2024-10-14 17:47:47.011107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.191 [2024-10-14 17:47:47.011123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.191 [2024-10-14 17:47:47.011131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.191 [2024-10-14 17:47:47.011290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.191 [2024-10-14 17:47:47.011448] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.191 [2024-10-14 17:47:47.011456] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.191 [2024-10-14 17:47:47.011462] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.191 [2024-10-14 17:47:47.014084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.191 [2024-10-14 17:47:47.023407] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.191 [2024-10-14 17:47:47.023856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.191 [2024-10-14 17:47:47.023902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.191 [2024-10-14 17:47:47.023927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.191 [2024-10-14 17:47:47.024459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.191 [2024-10-14 17:47:47.024632] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.191 [2024-10-14 17:47:47.024640] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.191 [2024-10-14 17:47:47.024646] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.191 [2024-10-14 17:47:47.030283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.191 [2024-10-14 17:47:47.038410] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.191 [2024-10-14 17:47:47.038935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.191 [2024-10-14 17:47:47.038957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.191 [2024-10-14 17:47:47.038968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.191 [2024-10-14 17:47:47.039220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.191 [2024-10-14 17:47:47.039473] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.191 [2024-10-14 17:47:47.039484] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.191 [2024-10-14 17:47:47.039493] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.191 [2024-10-14 17:47:47.043547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.191 [2024-10-14 17:47:47.051401] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.191 [2024-10-14 17:47:47.051833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.191 [2024-10-14 17:47:47.051878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.191 [2024-10-14 17:47:47.051902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.191 [2024-10-14 17:47:47.052482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.191 [2024-10-14 17:47:47.052713] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.191 [2024-10-14 17:47:47.052722] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.191 [2024-10-14 17:47:47.052728] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.191 [2024-10-14 17:47:47.055446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.191 [2024-10-14 17:47:47.064209] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.191 [2024-10-14 17:47:47.064673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.191 [2024-10-14 17:47:47.064720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.191 [2024-10-14 17:47:47.064744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.191 [2024-10-14 17:47:47.065323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.191 [2024-10-14 17:47:47.065535] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.191 [2024-10-14 17:47:47.065543] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.191 [2024-10-14 17:47:47.065549] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.191 [2024-10-14 17:47:47.068173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.191 [2024-10-14 17:47:47.076925] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.191 [2024-10-14 17:47:47.077349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.191 [2024-10-14 17:47:47.077393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.191 [2024-10-14 17:47:47.077417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.191 [2024-10-14 17:47:47.078019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.191 [2024-10-14 17:47:47.078556] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.191 [2024-10-14 17:47:47.078564] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.191 [2024-10-14 17:47:47.078570] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.191 [2024-10-14 17:47:47.081208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.191 [2024-10-14 17:47:47.089765] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.191 [2024-10-14 17:47:47.090161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.191 [2024-10-14 17:47:47.090206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.191 [2024-10-14 17:47:47.090230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.191 [2024-10-14 17:47:47.090732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.191 [2024-10-14 17:47:47.090900] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.191 [2024-10-14 17:47:47.090908] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.191 [2024-10-14 17:47:47.090914] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.191 [2024-10-14 17:47:47.093582] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.191 [2024-10-14 17:47:47.102500] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.191 [2024-10-14 17:47:47.102931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.191 [2024-10-14 17:47:47.102947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.191 [2024-10-14 17:47:47.102954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.191 [2024-10-14 17:47:47.103122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.191 [2024-10-14 17:47:47.103289] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.191 [2024-10-14 17:47:47.103297] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.191 [2024-10-14 17:47:47.103302] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.191 [2024-10-14 17:47:47.105926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.191 [2024-10-14 17:47:47.115253] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.191 [2024-10-14 17:47:47.115662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.191 [2024-10-14 17:47:47.115699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.191 [2024-10-14 17:47:47.115724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.191 [2024-10-14 17:47:47.116304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.191 [2024-10-14 17:47:47.116861] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.191 [2024-10-14 17:47:47.116869] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.191 [2024-10-14 17:47:47.116879] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.191 [2024-10-14 17:47:47.119484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.191 [2024-10-14 17:47:47.127964] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.191 [2024-10-14 17:47:47.128379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.191 [2024-10-14 17:47:47.128428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.191 [2024-10-14 17:47:47.128452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.191 [2024-10-14 17:47:47.129046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.191 [2024-10-14 17:47:47.129639] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.191 [2024-10-14 17:47:47.129667] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.191 [2024-10-14 17:47:47.129688] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.191 [2024-10-14 17:47:47.132325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.191 [2024-10-14 17:47:47.140752] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.191 [2024-10-14 17:47:47.141135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.192 [2024-10-14 17:47:47.141150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.192 [2024-10-14 17:47:47.141157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.192 [2024-10-14 17:47:47.141331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.192 [2024-10-14 17:47:47.141499] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.192 [2024-10-14 17:47:47.141507] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.192 [2024-10-14 17:47:47.141513] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.192 [2024-10-14 17:47:47.144198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.192 [2024-10-14 17:47:47.153552] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.192 [2024-10-14 17:47:47.153967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.192 [2024-10-14 17:47:47.153983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.192 [2024-10-14 17:47:47.153990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.192 [2024-10-14 17:47:47.154149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.192 [2024-10-14 17:47:47.154307] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.192 [2024-10-14 17:47:47.154314] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.192 [2024-10-14 17:47:47.154320] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.192 [2024-10-14 17:47:47.156940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.192 [2024-10-14 17:47:47.166409] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.192 [2024-10-14 17:47:47.166750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.192 [2024-10-14 17:47:47.166767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.192 [2024-10-14 17:47:47.166775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.192 [2024-10-14 17:47:47.166943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.192 [2024-10-14 17:47:47.167110] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.192 [2024-10-14 17:47:47.167119] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.192 [2024-10-14 17:47:47.167125] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.192 [2024-10-14 17:47:47.169757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.192 [2024-10-14 17:47:47.179210] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.192 [2024-10-14 17:47:47.179632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.192 [2024-10-14 17:47:47.179677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.192 [2024-10-14 17:47:47.179701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.192 [2024-10-14 17:47:47.180280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.192 [2024-10-14 17:47:47.180717] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.192 [2024-10-14 17:47:47.180725] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.192 [2024-10-14 17:47:47.180731] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.192 [2024-10-14 17:47:47.183337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.192 [2024-10-14 17:47:47.192253] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.192 [2024-10-14 17:47:47.192658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.192 [2024-10-14 17:47:47.192675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.192 [2024-10-14 17:47:47.192682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.192 [2024-10-14 17:47:47.192854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.192 [2024-10-14 17:47:47.193026] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.192 [2024-10-14 17:47:47.193034] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.192 [2024-10-14 17:47:47.193041] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.192 [2024-10-14 17:47:47.195780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.192 [2024-10-14 17:47:47.205257] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.192 [2024-10-14 17:47:47.205666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.192 [2024-10-14 17:47:47.205711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.192 [2024-10-14 17:47:47.205734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.192 [2024-10-14 17:47:47.206311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.192 [2024-10-14 17:47:47.206914] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.192 [2024-10-14 17:47:47.206947] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.192 [2024-10-14 17:47:47.206961] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.192 [2024-10-14 17:47:47.213181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.192 [2024-10-14 17:47:47.220209] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.192 [2024-10-14 17:47:47.220702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.192 [2024-10-14 17:47:47.220724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.192 [2024-10-14 17:47:47.220734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.192 [2024-10-14 17:47:47.220987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.192 [2024-10-14 17:47:47.221241] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.192 [2024-10-14 17:47:47.221252] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.192 [2024-10-14 17:47:47.221261] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.192 [2024-10-14 17:47:47.225325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.192 [2024-10-14 17:47:47.233289] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.192 [2024-10-14 17:47:47.233720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.192 [2024-10-14 17:47:47.233737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.192 [2024-10-14 17:47:47.233744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.192 [2024-10-14 17:47:47.233917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.192 [2024-10-14 17:47:47.234088] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.192 [2024-10-14 17:47:47.234097] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.192 [2024-10-14 17:47:47.234103] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.192 [2024-10-14 17:47:47.236851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.192 [2024-10-14 17:47:47.246130] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.192 [2024-10-14 17:47:47.246539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.192 [2024-10-14 17:47:47.246555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.192 [2024-10-14 17:47:47.246562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.192 [2024-10-14 17:47:47.246746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.192 [2024-10-14 17:47:47.246914] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.192 [2024-10-14 17:47:47.246922] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.192 [2024-10-14 17:47:47.246928] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.192 [2024-10-14 17:47:47.249530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.192 [2024-10-14 17:47:47.258888] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.192 [2024-10-14 17:47:47.259316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.192 [2024-10-14 17:47:47.259358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.192 [2024-10-14 17:47:47.259381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.192 [2024-10-14 17:47:47.259917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.192 [2024-10-14 17:47:47.260086] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.192 [2024-10-14 17:47:47.260094] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.192 [2024-10-14 17:47:47.260100] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.192 [2024-10-14 17:47:47.262671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.192 [2024-10-14 17:47:47.271646] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.192 [2024-10-14 17:47:47.272038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.192 [2024-10-14 17:47:47.272067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.192 [2024-10-14 17:47:47.272091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.192 [2024-10-14 17:47:47.272685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.192 [2024-10-14 17:47:47.273223] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.192 [2024-10-14 17:47:47.273231] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.192 [2024-10-14 17:47:47.273237] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.192 [2024-10-14 17:47:47.275985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.193 [2024-10-14 17:47:47.284419] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.193 [2024-10-14 17:47:47.284860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.193 [2024-10-14 17:47:47.284876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.193 [2024-10-14 17:47:47.284883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.193 [2024-10-14 17:47:47.285051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.193 [2024-10-14 17:47:47.285218] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.193 [2024-10-14 17:47:47.285225] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.193 [2024-10-14 17:47:47.285231] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.193 [2024-10-14 17:47:47.287855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.193 [2024-10-14 17:47:47.297594] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.193 [2024-10-14 17:47:47.298047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.193 [2024-10-14 17:47:47.298066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.193 [2024-10-14 17:47:47.298082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.193 [2024-10-14 17:47:47.298272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.193 [2024-10-14 17:47:47.298460] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.193 [2024-10-14 17:47:47.298473] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.193 [2024-10-14 17:47:47.298480] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.193 [2024-10-14 17:47:47.301412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.193 [2024-10-14 17:47:47.310575] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.193 [2024-10-14 17:47:47.311013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.193 [2024-10-14 17:47:47.311059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.193 [2024-10-14 17:47:47.311083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.193 [2024-10-14 17:47:47.311679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.193 [2024-10-14 17:47:47.312245] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.193 [2024-10-14 17:47:47.312253] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.193 [2024-10-14 17:47:47.312259] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.193 [2024-10-14 17:47:47.314968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.513 [2024-10-14 17:47:47.323765] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.514 [2024-10-14 17:47:47.324138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.514 [2024-10-14 17:47:47.324157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.514 [2024-10-14 17:47:47.324165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.514 [2024-10-14 17:47:47.324352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.514 [2024-10-14 17:47:47.324536] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.514 [2024-10-14 17:47:47.324549] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.514 [2024-10-14 17:47:47.324557] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.514 [2024-10-14 17:47:47.327438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.514 [2024-10-14 17:47:47.336893] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.514 [2024-10-14 17:47:47.337347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.514 [2024-10-14 17:47:47.337367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.514 [2024-10-14 17:47:47.337377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.514 [2024-10-14 17:47:47.337569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.514 [2024-10-14 17:47:47.337768] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.514 [2024-10-14 17:47:47.337782] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.514 [2024-10-14 17:47:47.337791] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.514 [2024-10-14 17:47:47.340735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.514 [2024-10-14 17:47:47.349874] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.514 [2024-10-14 17:47:47.350336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.514 [2024-10-14 17:47:47.350380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.514 [2024-10-14 17:47:47.350406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.514 [2024-10-14 17:47:47.351005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.514 [2024-10-14 17:47:47.351223] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.514 [2024-10-14 17:47:47.351231] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.514 [2024-10-14 17:47:47.351237] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.514 [2024-10-14 17:47:47.353950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.514 [2024-10-14 17:47:47.362790] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.514 [2024-10-14 17:47:47.363216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.514 [2024-10-14 17:47:47.363233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.514 [2024-10-14 17:47:47.363240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.514 [2024-10-14 17:47:47.363407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.514 [2024-10-14 17:47:47.363574] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.514 [2024-10-14 17:47:47.363582] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.514 [2024-10-14 17:47:47.363588] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.514 [2024-10-14 17:47:47.366331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.514 [2024-10-14 17:47:47.375580] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.514 [2024-10-14 17:47:47.376039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.514 [2024-10-14 17:47:47.376084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.514 [2024-10-14 17:47:47.376108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.514 [2024-10-14 17:47:47.376568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.514 [2024-10-14 17:47:47.376742] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.514 [2024-10-14 17:47:47.376751] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.514 [2024-10-14 17:47:47.376757] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.514 [2024-10-14 17:47:47.379359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.514 [2024-10-14 17:47:47.388346] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.514 [2024-10-14 17:47:47.388757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.514 [2024-10-14 17:47:47.388773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.514 [2024-10-14 17:47:47.388780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.514 [2024-10-14 17:47:47.388937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.514 [2024-10-14 17:47:47.389094] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.514 [2024-10-14 17:47:47.389102] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.514 [2024-10-14 17:47:47.389108] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.514 [2024-10-14 17:47:47.391795] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.514 [2024-10-14 17:47:47.401059] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.514 [2024-10-14 17:47:47.401477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.514 [2024-10-14 17:47:47.401521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.514 [2024-10-14 17:47:47.401545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.514 [2024-10-14 17:47:47.402139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.514 [2024-10-14 17:47:47.402494] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.514 [2024-10-14 17:47:47.402502] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.514 [2024-10-14 17:47:47.402508] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.514 [2024-10-14 17:47:47.405125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.514 [2024-10-14 17:47:47.413818] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.514 [2024-10-14 17:47:47.414234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.514 [2024-10-14 17:47:47.414249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.514 [2024-10-14 17:47:47.414256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.514 [2024-10-14 17:47:47.414415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.514 [2024-10-14 17:47:47.414573] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.514 [2024-10-14 17:47:47.414581] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.514 [2024-10-14 17:47:47.414587] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.514 [2024-10-14 17:47:47.417212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
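Each nine-record group above is one pass through the same reset path: a disconnect notice from nvme_ctrlr.c, a refused TCP connect from posix.c/nvme_tcp.c, a failed flush on the qpair that never got a socket, and then the error-state, reinitialization-failed, failed-state, and reset-failed records. A self-contained C sketch of that outer loop, with hypothetical names (try_reconnect is not an SPDK function; only the behavior, connect-fail-retry at a fixed cadence against the address and port from the log, mirrors what is recorded here):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

/* Hypothetical helper: one connect attempt to the target's NVMe/TCP port.
 * Returns 0 on success, otherwise the errno from connect() (111 here). */
static int try_reconnect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return errno;
    }
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(port),
    };
    inet_pton(AF_INET, ip, &addr.sin_addr);
    int rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
    int err = (rc == 0) ? 0 : errno;
    close(fd);
    return err;
}

int main(void)
{
    /* Address and port taken from the log; the ~13 ms pause between
     * attempts matches the spacing of the "resetting controller" notices. */
    for (int attempt = 1; attempt <= 5; attempt++) {
        fprintf(stderr, "resetting controller (attempt %d)\n", attempt);
        int err = try_reconnect("10.0.0.2", 4420);
        if (err == 0) {
            fprintf(stderr, "reconnected\n");
            return 0;
        }
        fprintf(stderr, "connect() failed, errno = %d (%s)\n", err, strerror(err));
        struct timespec ts = { .tv_sec = 0, .tv_nsec = 13 * 1000 * 1000 };
        nanosleep(&ts, NULL);
    }
    fprintf(stderr, "controller left in failed state\n");
    return 1;
}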
00:30:48.514 [2024-10-14 17:47:47.426612] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.514 [2024-10-14 17:47:47.427033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.514 [2024-10-14 17:47:47.427077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.514 [2024-10-14 17:47:47.427108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.514 [2024-10-14 17:47:47.427498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.514 [2024-10-14 17:47:47.427679] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.514 [2024-10-14 17:47:47.427688] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.514 [2024-10-14 17:47:47.427694] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.514 [2024-10-14 17:47:47.430281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.514 [2024-10-14 17:47:47.439390] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.514 [2024-10-14 17:47:47.439827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.514 [2024-10-14 17:47:47.439844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.514 [2024-10-14 17:47:47.439851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.514 [2024-10-14 17:47:47.440023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.514 [2024-10-14 17:47:47.440194] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.514 [2024-10-14 17:47:47.440203] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.514 [2024-10-14 17:47:47.440209] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.514 [2024-10-14 17:47:47.442956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.514 [2024-10-14 17:47:47.452395] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.514 [2024-10-14 17:47:47.452813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.514 [2024-10-14 17:47:47.452830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.514 [2024-10-14 17:47:47.452837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.514 [2024-10-14 17:47:47.453010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.515 [2024-10-14 17:47:47.453182] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.515 [2024-10-14 17:47:47.453190] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.515 [2024-10-14 17:47:47.453197] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.515 [2024-10-14 17:47:47.455945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.515 [2024-10-14 17:47:47.465327] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.515 [2024-10-14 17:47:47.465723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.515 [2024-10-14 17:47:47.465740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.515 [2024-10-14 17:47:47.465747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.515 [2024-10-14 17:47:47.465914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.515 [2024-10-14 17:47:47.466082] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.515 [2024-10-14 17:47:47.466092] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.515 [2024-10-14 17:47:47.466099] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.515 [2024-10-14 17:47:47.468815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.515 [2024-10-14 17:47:47.478108] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.515 [2024-10-14 17:47:47.478523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.515 [2024-10-14 17:47:47.478540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.515 [2024-10-14 17:47:47.478547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.515 [2024-10-14 17:47:47.478720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.515 [2024-10-14 17:47:47.478888] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.515 [2024-10-14 17:47:47.478896] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.515 [2024-10-14 17:47:47.478902] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.515 [2024-10-14 17:47:47.481508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.515 [2024-10-14 17:47:47.490951] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.515 [2024-10-14 17:47:47.491366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.515 [2024-10-14 17:47:47.491382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.515 [2024-10-14 17:47:47.491390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.515 [2024-10-14 17:47:47.491557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.515 [2024-10-14 17:47:47.491730] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.515 [2024-10-14 17:47:47.491739] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.515 [2024-10-14 17:47:47.491745] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.515 [2024-10-14 17:47:47.494345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.515 [2024-10-14 17:47:47.503709] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.515 [2024-10-14 17:47:47.504024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.515 [2024-10-14 17:47:47.504040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.515 [2024-10-14 17:47:47.504047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.515 [2024-10-14 17:47:47.504205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.515 [2024-10-14 17:47:47.504363] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.515 [2024-10-14 17:47:47.504371] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.515 [2024-10-14 17:47:47.504377] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.515 [2024-10-14 17:47:47.506997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.515 [2024-10-14 17:47:47.516573] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.515 [2024-10-14 17:47:47.516984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.515 [2024-10-14 17:47:47.517000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.515 [2024-10-14 17:47:47.517007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.515 [2024-10-14 17:47:47.517174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.515 [2024-10-14 17:47:47.517341] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.515 [2024-10-14 17:47:47.517349] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.515 [2024-10-14 17:47:47.517356] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.515 [2024-10-14 17:47:47.519969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.515 [2024-10-14 17:47:47.529410] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.515 [2024-10-14 17:47:47.529800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.515 [2024-10-14 17:47:47.529817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.515 [2024-10-14 17:47:47.529824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.515 [2024-10-14 17:47:47.529992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.515 [2024-10-14 17:47:47.530159] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.515 [2024-10-14 17:47:47.530167] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.515 [2024-10-14 17:47:47.530173] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.515 [2024-10-14 17:47:47.532797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.515 [2024-10-14 17:47:47.542335] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.515 [2024-10-14 17:47:47.542754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.515 [2024-10-14 17:47:47.542799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.515 [2024-10-14 17:47:47.542823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.515 [2024-10-14 17:47:47.543402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.515 [2024-10-14 17:47:47.543886] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.515 [2024-10-14 17:47:47.543895] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.515 [2024-10-14 17:47:47.543901] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.515 [2024-10-14 17:47:47.546480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.515 [2024-10-14 17:47:47.555164] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.515 [2024-10-14 17:47:47.555574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.515 [2024-10-14 17:47:47.555591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.515 [2024-10-14 17:47:47.555598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.515 [2024-10-14 17:47:47.555776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.515 [2024-10-14 17:47:47.555944] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.515 [2024-10-14 17:47:47.555952] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.515 [2024-10-14 17:47:47.555958] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.515 [2024-10-14 17:47:47.558609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.515 [2024-10-14 17:47:47.567954] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.515 [2024-10-14 17:47:47.568288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.515 [2024-10-14 17:47:47.568304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.515 [2024-10-14 17:47:47.568311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.515 [2024-10-14 17:47:47.568478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.515 [2024-10-14 17:47:47.568652] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.515 [2024-10-14 17:47:47.568660] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.515 [2024-10-14 17:47:47.568667] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.515 [2024-10-14 17:47:47.571267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.515 [2024-10-14 17:47:47.580727] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.515 [2024-10-14 17:47:47.581142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.515 [2024-10-14 17:47:47.581158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.515 [2024-10-14 17:47:47.581165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.515 [2024-10-14 17:47:47.581332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.515 [2024-10-14 17:47:47.581500] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.515 [2024-10-14 17:47:47.581508] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.515 [2024-10-14 17:47:47.581514] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.515 [2024-10-14 17:47:47.584115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.515 [2024-10-14 17:47:47.593603] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.515 [2024-10-14 17:47:47.594029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.515 [2024-10-14 17:47:47.594045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.515 [2024-10-14 17:47:47.594053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.516 [2024-10-14 17:47:47.594219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.516 [2024-10-14 17:47:47.594387] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.516 [2024-10-14 17:47:47.594394] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.516 [2024-10-14 17:47:47.594404] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.516 [2024-10-14 17:47:47.597018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.516 [2024-10-14 17:47:47.606418] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.516 [2024-10-14 17:47:47.606826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.516 [2024-10-14 17:47:47.606842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.516 [2024-10-14 17:47:47.606849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.516 [2024-10-14 17:47:47.607017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.516 [2024-10-14 17:47:47.607183] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.516 [2024-10-14 17:47:47.607192] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.516 [2024-10-14 17:47:47.607198] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.516 [2024-10-14 17:47:47.609890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.516 [2024-10-14 17:47:47.619145] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.516 [2024-10-14 17:47:47.619559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.516 [2024-10-14 17:47:47.619574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.516 [2024-10-14 17:47:47.619582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.516 [2024-10-14 17:47:47.619754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.516 [2024-10-14 17:47:47.619921] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.516 [2024-10-14 17:47:47.619930] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.516 [2024-10-14 17:47:47.619935] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.516 [2024-10-14 17:47:47.622718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.516 [2024-10-14 17:47:47.632335] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.516 [2024-10-14 17:47:47.632760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.516 [2024-10-14 17:47:47.632781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.516 [2024-10-14 17:47:47.632792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.516 [2024-10-14 17:47:47.632985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.516 [2024-10-14 17:47:47.633175] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.516 [2024-10-14 17:47:47.633188] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.516 [2024-10-14 17:47:47.633196] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.516 [2024-10-14 17:47:47.636068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.516 [2024-10-14 17:47:47.645330] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.516 [2024-10-14 17:47:47.645731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.516 [2024-10-14 17:47:47.645751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.516 [2024-10-14 17:47:47.645759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.516 [2024-10-14 17:47:47.645928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.516 [2024-10-14 17:47:47.646096] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.516 [2024-10-14 17:47:47.646104] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.516 [2024-10-14 17:47:47.646110] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.516 [2024-10-14 17:47:47.648768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.775 [2024-10-14 17:47:47.658359] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.775 [2024-10-14 17:47:47.658768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.775 [2024-10-14 17:47:47.658787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.775 [2024-10-14 17:47:47.658795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.775 [2024-10-14 17:47:47.658970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.775 [2024-10-14 17:47:47.659143] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.775 [2024-10-14 17:47:47.659151] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.775 [2024-10-14 17:47:47.659158] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.775 [2024-10-14 17:47:47.661837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.776 [2024-10-14 17:47:47.671105] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.776 [2024-10-14 17:47:47.671510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.776 [2024-10-14 17:47:47.671557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.776 [2024-10-14 17:47:47.671581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.776 [2024-10-14 17:47:47.672178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.776 [2024-10-14 17:47:47.672393] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.776 [2024-10-14 17:47:47.672401] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.776 [2024-10-14 17:47:47.672408] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.776 [2024-10-14 17:47:47.675012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.776 [2024-10-14 17:47:47.683901] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.776 [2024-10-14 17:47:47.684301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.776 [2024-10-14 17:47:47.684316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.776 [2024-10-14 17:47:47.684323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.776 [2024-10-14 17:47:47.684481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.776 [2024-10-14 17:47:47.684667] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.776 [2024-10-14 17:47:47.684675] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.776 [2024-10-14 17:47:47.684681] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.776 [2024-10-14 17:47:47.687343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.776 [2024-10-14 17:47:47.696751] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.776 [2024-10-14 17:47:47.697169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.776 [2024-10-14 17:47:47.697187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.776 [2024-10-14 17:47:47.697195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.776 [2024-10-14 17:47:47.697367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.776 [2024-10-14 17:47:47.697539] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.776 [2024-10-14 17:47:47.697548] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.776 [2024-10-14 17:47:47.697554] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.776 [2024-10-14 17:47:47.700307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.776 [2024-10-14 17:47:47.709729] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.776 [2024-10-14 17:47:47.710136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.776 [2024-10-14 17:47:47.710152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.776 [2024-10-14 17:47:47.710160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.776 [2024-10-14 17:47:47.710332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.776 [2024-10-14 17:47:47.710504] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.776 [2024-10-14 17:47:47.710512] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.776 [2024-10-14 17:47:47.710519] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.776 [2024-10-14 17:47:47.713227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.776 [2024-10-14 17:47:47.722708] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.776 [2024-10-14 17:47:47.723107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.776 [2024-10-14 17:47:47.723123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.776 [2024-10-14 17:47:47.723130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.776 [2024-10-14 17:47:47.723297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.776 [2024-10-14 17:47:47.723464] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.776 [2024-10-14 17:47:47.723472] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.776 [2024-10-14 17:47:47.723478] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.776 [2024-10-14 17:47:47.726211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.776 [2024-10-14 17:47:47.735520] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.776 [2024-10-14 17:47:47.735931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.776 [2024-10-14 17:47:47.735948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.776 [2024-10-14 17:47:47.735955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.776 [2024-10-14 17:47:47.736122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.776 [2024-10-14 17:47:47.736290] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.776 [2024-10-14 17:47:47.736298] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.776 [2024-10-14 17:47:47.736304] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.776 [2024-10-14 17:47:47.738920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.776 [2024-10-14 17:47:47.748286] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.776 [2024-10-14 17:47:47.748604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:48.776 [2024-10-14 17:47:47.748620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:48.776 [2024-10-14 17:47:47.748643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:48.776 [2024-10-14 17:47:47.748810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:48.776 [2024-10-14 17:47:47.748976] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:48.776 [2024-10-14 17:47:47.748984] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:48.776 [2024-10-14 17:47:47.748990] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.776 [2024-10-14 17:47:47.751617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:48.776 [2024-10-14 17:47:47.761142] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.776 [2024-10-14 17:47:47.761555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.776 [2024-10-14 17:47:47.761599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.776 [2024-10-14 17:47:47.761638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.776 [2024-10-14 17:47:47.762086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.776 [2024-10-14 17:47:47.762253] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.776 [2024-10-14 17:47:47.762261] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.776 [2024-10-14 17:47:47.762267] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.776 [2024-10-14 17:47:47.764871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.776 [2024-10-14 17:47:47.773925] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.776 [2024-10-14 17:47:47.774340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.776 [2024-10-14 17:47:47.774357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.776 [2024-10-14 17:47:47.774367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.776 [2024-10-14 17:47:47.774535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.776 [2024-10-14 17:47:47.774707] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.776 [2024-10-14 17:47:47.774716] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.776 [2024-10-14 17:47:47.774722] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.776 [2024-10-14 17:47:47.777325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.776 [2024-10-14 17:47:47.786790] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.776 [2024-10-14 17:47:47.787199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.776 [2024-10-14 17:47:47.787215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.776 [2024-10-14 17:47:47.787222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.776 [2024-10-14 17:47:47.787389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.776 [2024-10-14 17:47:47.787557] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.776 [2024-10-14 17:47:47.787565] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.776 [2024-10-14 17:47:47.787571] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.776 [2024-10-14 17:47:47.790183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.776 [2024-10-14 17:47:47.799610] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.776 [2024-10-14 17:47:47.800025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.776 [2024-10-14 17:47:47.800041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.776 [2024-10-14 17:47:47.800049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.776 [2024-10-14 17:47:47.800217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.777 [2024-10-14 17:47:47.800384] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.777 [2024-10-14 17:47:47.800392] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.777 [2024-10-14 17:47:47.800398] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.777 [2024-10-14 17:47:47.803047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.777 [2024-10-14 17:47:47.812395] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.777 [2024-10-14 17:47:47.812859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.777 [2024-10-14 17:47:47.812903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.777 [2024-10-14 17:47:47.812927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.777 [2024-10-14 17:47:47.813429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.777 [2024-10-14 17:47:47.813597] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.777 [2024-10-14 17:47:47.813614] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.777 [2024-10-14 17:47:47.813622] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.777 [2024-10-14 17:47:47.816223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.777 [2024-10-14 17:47:47.825188] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.777 [2024-10-14 17:47:47.825606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.777 [2024-10-14 17:47:47.825622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.777 [2024-10-14 17:47:47.825645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.777 [2024-10-14 17:47:47.825813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.777 [2024-10-14 17:47:47.825980] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.777 [2024-10-14 17:47:47.825988] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.777 [2024-10-14 17:47:47.825995] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.777 [2024-10-14 17:47:47.828665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.777 [2024-10-14 17:47:47.838127] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.777 [2024-10-14 17:47:47.838514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.777 [2024-10-14 17:47:47.838559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.777 [2024-10-14 17:47:47.838582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.777 [2024-10-14 17:47:47.839179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.777 [2024-10-14 17:47:47.839417] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.777 [2024-10-14 17:47:47.839425] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.777 [2024-10-14 17:47:47.839431] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.777 [2024-10-14 17:47:47.842116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.777 [2024-10-14 17:47:47.850979] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.777 [2024-10-14 17:47:47.851379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.777 [2024-10-14 17:47:47.851396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.777 [2024-10-14 17:47:47.851404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.777 [2024-10-14 17:47:47.851570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.777 [2024-10-14 17:47:47.851763] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.777 [2024-10-14 17:47:47.851772] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.777 [2024-10-14 17:47:47.851779] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.777 [2024-10-14 17:47:47.854671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.777 [2024-10-14 17:47:47.863847] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.777 [2024-10-14 17:47:47.864206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.777 [2024-10-14 17:47:47.864223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.777 [2024-10-14 17:47:47.864230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.777 [2024-10-14 17:47:47.864398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.777 [2024-10-14 17:47:47.864564] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.777 [2024-10-14 17:47:47.864573] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.777 [2024-10-14 17:47:47.864580] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.777 [2024-10-14 17:47:47.867260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.777 [2024-10-14 17:47:47.876845] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.777 [2024-10-14 17:47:47.877301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.777 [2024-10-14 17:47:47.877318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.777 [2024-10-14 17:47:47.877326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.777 [2024-10-14 17:47:47.877498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.777 [2024-10-14 17:47:47.877677] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.777 [2024-10-14 17:47:47.877686] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.777 [2024-10-14 17:47:47.877693] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.777 [2024-10-14 17:47:47.880440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.777 [2024-10-14 17:47:47.889849] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.777 [2024-10-14 17:47:47.890196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.777 [2024-10-14 17:47:47.890213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.777 [2024-10-14 17:47:47.890220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.777 [2024-10-14 17:47:47.890392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.777 [2024-10-14 17:47:47.890565] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.777 [2024-10-14 17:47:47.890574] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.777 [2024-10-14 17:47:47.890580] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.777 [2024-10-14 17:47:47.893328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.777 [2024-10-14 17:47:47.902880] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.777 [2024-10-14 17:47:47.903237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.777 [2024-10-14 17:47:47.903254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:48.777 [2024-10-14 17:47:47.903265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:48.777 [2024-10-14 17:47:47.903436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:48.777 [2024-10-14 17:47:47.903614] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.777 [2024-10-14 17:47:47.903623] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.777 [2024-10-14 17:47:47.903628] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.777 [2024-10-14 17:47:47.906365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.036 [2024-10-14 17:47:47.916003] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.036 [2024-10-14 17:47:47.916435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.036 [2024-10-14 17:47:47.916452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.036 [2024-10-14 17:47:47.916460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.036 [2024-10-14 17:47:47.916637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.036 [2024-10-14 17:47:47.916810] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.036 [2024-10-14 17:47:47.916819] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.036 [2024-10-14 17:47:47.916826] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.036 [2024-10-14 17:47:47.919580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.036 [2024-10-14 17:47:47.929015] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.036 [2024-10-14 17:47:47.929387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.036 [2024-10-14 17:47:47.929404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.036 [2024-10-14 17:47:47.929411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.036 [2024-10-14 17:47:47.929584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.036 [2024-10-14 17:47:47.929763] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.036 [2024-10-14 17:47:47.929773] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.036 [2024-10-14 17:47:47.929779] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.036 [2024-10-14 17:47:47.932537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.036 [2024-10-14 17:47:47.942150] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.036 [2024-10-14 17:47:47.942494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.036 [2024-10-14 17:47:47.942510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.036 [2024-10-14 17:47:47.942518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.036 [2024-10-14 17:47:47.942697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.036 [2024-10-14 17:47:47.942879] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.036 [2024-10-14 17:47:47.942887] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.036 [2024-10-14 17:47:47.942899] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.036 [2024-10-14 17:47:47.945621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.036 [2024-10-14 17:47:47.955259] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.036 [2024-10-14 17:47:47.955676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.036 [2024-10-14 17:47:47.955693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.036 [2024-10-14 17:47:47.955701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.036 [2024-10-14 17:47:47.955873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.036 [2024-10-14 17:47:47.956045] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.036 [2024-10-14 17:47:47.956053] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.036 [2024-10-14 17:47:47.956060] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.036 [2024-10-14 17:47:47.958810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.036 [2024-10-14 17:47:47.968205] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.036 [2024-10-14 17:47:47.968564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.036 [2024-10-14 17:47:47.968622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.036 [2024-10-14 17:47:47.968648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.036 [2024-10-14 17:47:47.969225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.036 [2024-10-14 17:47:47.969790] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.036 [2024-10-14 17:47:47.969799] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.036 [2024-10-14 17:47:47.969805] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.036 [2024-10-14 17:47:47.972524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.036 7700.00 IOPS, 30.08 MiB/s [2024-10-14T15:47:48.174Z] [2024-10-14 17:47:47.982414] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.036 [2024-10-14 17:47:47.982771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.036 [2024-10-14 17:47:47.982798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.037 [2024-10-14 17:47:47.982806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.037 [2024-10-14 17:47:47.982972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.037 [2024-10-14 17:47:47.983140] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.037 [2024-10-14 17:47:47.983148] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.037 [2024-10-14 17:47:47.983155] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.037 [2024-10-14 17:47:47.985826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.037 [2024-10-14 17:47:47.995269] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.037 [2024-10-14 17:47:47.995715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.037 [2024-10-14 17:47:47.995732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.037 [2024-10-14 17:47:47.995740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.037 [2024-10-14 17:47:47.995906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.037 [2024-10-14 17:47:47.996074] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.037 [2024-10-14 17:47:47.996082] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.037 [2024-10-14 17:47:47.996088] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.037 [2024-10-14 17:47:47.998693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.037 [2024-10-14 17:47:48.008116] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.037 [2024-10-14 17:47:48.008524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.037 [2024-10-14 17:47:48.008541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.037 [2024-10-14 17:47:48.008548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.037 [2024-10-14 17:47:48.008720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.037 [2024-10-14 17:47:48.008889] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.037 [2024-10-14 17:47:48.008897] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.037 [2024-10-14 17:47:48.008903] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.037 [2024-10-14 17:47:48.011579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.037 [2024-10-14 17:47:48.021037] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.037 [2024-10-14 17:47:48.021447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.037 [2024-10-14 17:47:48.021464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.037 [2024-10-14 17:47:48.021471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.037 [2024-10-14 17:47:48.021645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.037 [2024-10-14 17:47:48.021812] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.037 [2024-10-14 17:47:48.021820] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.037 [2024-10-14 17:47:48.021826] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.037 [2024-10-14 17:47:48.024494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.037 [2024-10-14 17:47:48.033931] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.037 [2024-10-14 17:47:48.034375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.037 [2024-10-14 17:47:48.034419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.037 [2024-10-14 17:47:48.034443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.037 [2024-10-14 17:47:48.034927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.037 [2024-10-14 17:47:48.035095] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.037 [2024-10-14 17:47:48.035103] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.037 [2024-10-14 17:47:48.035109] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.037 [2024-10-14 17:47:48.037792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.037 [2024-10-14 17:47:48.046977] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.037 [2024-10-14 17:47:48.047404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.037 [2024-10-14 17:47:48.047419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.037 [2024-10-14 17:47:48.047427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.037 [2024-10-14 17:47:48.047599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.037 [2024-10-14 17:47:48.047776] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.037 [2024-10-14 17:47:48.047785] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.037 [2024-10-14 17:47:48.047791] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.037 [2024-10-14 17:47:48.050525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.037 [2024-10-14 17:47:48.060061] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.037 [2024-10-14 17:47:48.060438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.037 [2024-10-14 17:47:48.060453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.037 [2024-10-14 17:47:48.060460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.037 [2024-10-14 17:47:48.060638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.037 [2024-10-14 17:47:48.060812] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.037 [2024-10-14 17:47:48.060820] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.037 [2024-10-14 17:47:48.060826] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.037 [2024-10-14 17:47:48.063568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.037 [2024-10-14 17:47:48.073138] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.037 [2024-10-14 17:47:48.073561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.037 [2024-10-14 17:47:48.073578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.037 [2024-10-14 17:47:48.073585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.037 [2024-10-14 17:47:48.073762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.037 [2024-10-14 17:47:48.073934] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.037 [2024-10-14 17:47:48.073943] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.037 [2024-10-14 17:47:48.073953] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.037 [2024-10-14 17:47:48.076701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.037 [2024-10-14 17:47:48.086031] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.037 [2024-10-14 17:47:48.086395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.037 [2024-10-14 17:47:48.086412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.037 [2024-10-14 17:47:48.086419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.037 [2024-10-14 17:47:48.086590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.037 [2024-10-14 17:47:48.086768] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.037 [2024-10-14 17:47:48.086778] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.037 [2024-10-14 17:47:48.086784] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.037 [2024-10-14 17:47:48.089499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.037 [2024-10-14 17:47:48.099003] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.037 [2024-10-14 17:47:48.099447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.037 [2024-10-14 17:47:48.099463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.037 [2024-10-14 17:47:48.099471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.037 [2024-10-14 17:47:48.099644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.037 [2024-10-14 17:47:48.099822] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.037 [2024-10-14 17:47:48.099830] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.037 [2024-10-14 17:47:48.099837] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.037 [2024-10-14 17:47:48.102504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.037 [2024-10-14 17:47:48.112011] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.037 [2024-10-14 17:47:48.112478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.037 [2024-10-14 17:47:48.112494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.037 [2024-10-14 17:47:48.112501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.037 [2024-10-14 17:47:48.112673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.037 [2024-10-14 17:47:48.112841] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.037 [2024-10-14 17:47:48.112849] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.037 [2024-10-14 17:47:48.112855] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.037 [2024-10-14 17:47:48.115522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.037 [2024-10-14 17:47:48.124915] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.038 [2024-10-14 17:47:48.125316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.038 [2024-10-14 17:47:48.125335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.038 [2024-10-14 17:47:48.125343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.038 [2024-10-14 17:47:48.125510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.038 [2024-10-14 17:47:48.125682] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.038 [2024-10-14 17:47:48.125691] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.038 [2024-10-14 17:47:48.125697] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.038 [2024-10-14 17:47:48.128364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.038 [2024-10-14 17:47:48.137751] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.038 [2024-10-14 17:47:48.138125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.038 [2024-10-14 17:47:48.138140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.038 [2024-10-14 17:47:48.138148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.038 [2024-10-14 17:47:48.138316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.038 [2024-10-14 17:47:48.138483] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.038 [2024-10-14 17:47:48.138491] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.038 [2024-10-14 17:47:48.138497] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.038 [2024-10-14 17:47:48.141155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.038 [2024-10-14 17:47:48.150585] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.038 [2024-10-14 17:47:48.150923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.038 [2024-10-14 17:47:48.150940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.038 [2024-10-14 17:47:48.150947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.038 [2024-10-14 17:47:48.151114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.038 [2024-10-14 17:47:48.151281] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.038 [2024-10-14 17:47:48.151288] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.038 [2024-10-14 17:47:48.151294] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.038 [2024-10-14 17:47:48.153938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.038 [2024-10-14 17:47:48.163564] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.038 [2024-10-14 17:47:48.163963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.038 [2024-10-14 17:47:48.163980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.038 [2024-10-14 17:47:48.163987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.038 [2024-10-14 17:47:48.164154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.038 [2024-10-14 17:47:48.164325] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.038 [2024-10-14 17:47:48.164333] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.038 [2024-10-14 17:47:48.164339] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.038 [2024-10-14 17:47:48.166983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.297 [2024-10-14 17:47:48.176765] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.297 [2024-10-14 17:47:48.177057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.297 [2024-10-14 17:47:48.177073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.297 [2024-10-14 17:47:48.177080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.297 [2024-10-14 17:47:48.177252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.297 [2024-10-14 17:47:48.177425] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.297 [2024-10-14 17:47:48.177433] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.297 [2024-10-14 17:47:48.177440] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.297 [2024-10-14 17:47:48.180144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.297 [2024-10-14 17:47:48.189710] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.297 [2024-10-14 17:47:48.190061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.297 [2024-10-14 17:47:48.190106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.297 [2024-10-14 17:47:48.190132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.297 [2024-10-14 17:47:48.190724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.297 [2024-10-14 17:47:48.191286] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.297 [2024-10-14 17:47:48.191295] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.297 [2024-10-14 17:47:48.191301] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.297 [2024-10-14 17:47:48.193997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.297 [2024-10-14 17:47:48.202557] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.297 [2024-10-14 17:47:48.203016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.297 [2024-10-14 17:47:48.203033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.297 [2024-10-14 17:47:48.203040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.297 [2024-10-14 17:47:48.203207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.297 [2024-10-14 17:47:48.203374] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.297 [2024-10-14 17:47:48.203382] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.297 [2024-10-14 17:47:48.203389] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.297 [2024-10-14 17:47:48.206008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.297 [2024-10-14 17:47:48.215689] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.298 [2024-10-14 17:47:48.215970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.298 [2024-10-14 17:47:48.215986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.298 [2024-10-14 17:47:48.215994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.298 [2024-10-14 17:47:48.216167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.298 [2024-10-14 17:47:48.216338] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.298 [2024-10-14 17:47:48.216347] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.298 [2024-10-14 17:47:48.216354] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.298 [2024-10-14 17:47:48.219109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.298 [2024-10-14 17:47:48.228724] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.298 [2024-10-14 17:47:48.229166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.298 [2024-10-14 17:47:48.229210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.298 [2024-10-14 17:47:48.229233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.298 [2024-10-14 17:47:48.229827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.298 [2024-10-14 17:47:48.230408] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.298 [2024-10-14 17:47:48.230433] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.298 [2024-10-14 17:47:48.230455] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.298 [2024-10-14 17:47:48.236691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.298 [2024-10-14 17:47:48.243673] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.298 [2024-10-14 17:47:48.244182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.298 [2024-10-14 17:47:48.244203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.298 [2024-10-14 17:47:48.244214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.298 [2024-10-14 17:47:48.244467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.298 [2024-10-14 17:47:48.244728] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.298 [2024-10-14 17:47:48.244740] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.298 [2024-10-14 17:47:48.244749] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.298 [2024-10-14 17:47:48.248805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.298 [2024-10-14 17:47:48.256760] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.298 [2024-10-14 17:47:48.257113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.298 [2024-10-14 17:47:48.257129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.298 [2024-10-14 17:47:48.257139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.298 [2024-10-14 17:47:48.257312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.298 [2024-10-14 17:47:48.257484] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.298 [2024-10-14 17:47:48.257492] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.298 [2024-10-14 17:47:48.257498] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.298 [2024-10-14 17:47:48.260247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.298 [2024-10-14 17:47:48.269709] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.298 [2024-10-14 17:47:48.269988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.298 [2024-10-14 17:47:48.270004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:49.298 [2024-10-14 17:47:48.270011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:49.298 [2024-10-14 17:47:48.270179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:49.298 [2024-10-14 17:47:48.270345] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.298 [2024-10-14 17:47:48.270353] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.298 [2024-10-14 17:47:48.270359] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.298 [2024-10-14 17:47:48.272954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.298 [2024-10-14 17:47:48.282608] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.298 [2024-10-14 17:47:48.282943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.298 [2024-10-14 17:47:48.282959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.298 [2024-10-14 17:47:48.282966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.298 [2024-10-14 17:47:48.283134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.298 [2024-10-14 17:47:48.283301] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.298 [2024-10-14 17:47:48.283309] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.298 [2024-10-14 17:47:48.283315] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.298 [2024-10-14 17:47:48.285991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.298 [2024-10-14 17:47:48.295498] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.298 [2024-10-14 17:47:48.295827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.298 [2024-10-14 17:47:48.295843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.298 [2024-10-14 17:47:48.295850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.298 [2024-10-14 17:47:48.296017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.298 [2024-10-14 17:47:48.296184] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.298 [2024-10-14 17:47:48.296195] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.298 [2024-10-14 17:47:48.296201] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.298 [2024-10-14 17:47:48.298836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.298 [2024-10-14 17:47:48.308429] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.298 [2024-10-14 17:47:48.308794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.298 [2024-10-14 17:47:48.308810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.298 [2024-10-14 17:47:48.308817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.298 [2024-10-14 17:47:48.308984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.298 [2024-10-14 17:47:48.309151] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.298 [2024-10-14 17:47:48.309159] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.298 [2024-10-14 17:47:48.309165] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.298 [2024-10-14 17:47:48.311830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.298 [2024-10-14 17:47:48.321367] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.298 [2024-10-14 17:47:48.321766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.298 [2024-10-14 17:47:48.321811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.298 [2024-10-14 17:47:48.321834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.298 [2024-10-14 17:47:48.322413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.298 [2024-10-14 17:47:48.322878] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.298 [2024-10-14 17:47:48.322887] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.298 [2024-10-14 17:47:48.322893] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.298 [2024-10-14 17:47:48.325547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.298 [2024-10-14 17:47:48.334194] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.298 [2024-10-14 17:47:48.334594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.298 [2024-10-14 17:47:48.334614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.298 [2024-10-14 17:47:48.334622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.298 [2024-10-14 17:47:48.334789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.298 [2024-10-14 17:47:48.334956] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.298 [2024-10-14 17:47:48.334964] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.298 [2024-10-14 17:47:48.334970] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.298 [2024-10-14 17:47:48.337573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.298 [2024-10-14 17:47:48.347005] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.298 [2024-10-14 17:47:48.347408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.298 [2024-10-14 17:47:48.347424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.298 [2024-10-14 17:47:48.347432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.298 [2024-10-14 17:47:48.347599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.298 [2024-10-14 17:47:48.347773] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.298 [2024-10-14 17:47:48.347781] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.298 [2024-10-14 17:47:48.347787] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.299 [2024-10-14 17:47:48.350391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.299 [2024-10-14 17:47:48.359752] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.299 [2024-10-14 17:47:48.360156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.299 [2024-10-14 17:47:48.360171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.299 [2024-10-14 17:47:48.360178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.299 [2024-10-14 17:47:48.360346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.299 [2024-10-14 17:47:48.360512] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.299 [2024-10-14 17:47:48.360520] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.299 [2024-10-14 17:47:48.360526] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.299 [2024-10-14 17:47:48.363174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.299 [2024-10-14 17:47:48.372566] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.299 [2024-10-14 17:47:48.372978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.299 [2024-10-14 17:47:48.372994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.299 [2024-10-14 17:47:48.373002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.299 [2024-10-14 17:47:48.373168] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.299 [2024-10-14 17:47:48.373335] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.299 [2024-10-14 17:47:48.373343] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.299 [2024-10-14 17:47:48.373349] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.299 [2024-10-14 17:47:48.375970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.299 [2024-10-14 17:47:48.385333] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.299 [2024-10-14 17:47:48.385723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.299 [2024-10-14 17:47:48.385739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.299 [2024-10-14 17:47:48.385749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.299 [2024-10-14 17:47:48.385909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.299 [2024-10-14 17:47:48.386067] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.299 [2024-10-14 17:47:48.386075] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.299 [2024-10-14 17:47:48.386080] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.299 [2024-10-14 17:47:48.388697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.299 [2024-10-14 17:47:48.398161] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.299 [2024-10-14 17:47:48.398508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.299 [2024-10-14 17:47:48.398523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.299 [2024-10-14 17:47:48.398530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.299 [2024-10-14 17:47:48.398714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.299 [2024-10-14 17:47:48.398881] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.299 [2024-10-14 17:47:48.398890] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.299 [2024-10-14 17:47:48.398896] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.299 [2024-10-14 17:47:48.401500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.299 [2024-10-14 17:47:48.410871] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.299 [2024-10-14 17:47:48.411279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.299 [2024-10-14 17:47:48.411324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.299 [2024-10-14 17:47:48.411348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.299 [2024-10-14 17:47:48.411882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.299 [2024-10-14 17:47:48.412281] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.299 [2024-10-14 17:47:48.412299] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.299 [2024-10-14 17:47:48.412313] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.299 [2024-10-14 17:47:48.418539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.299 [2024-10-14 17:47:48.425843] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.299 [2024-10-14 17:47:48.426310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.299 [2024-10-14 17:47:48.426331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.299 [2024-10-14 17:47:48.426341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.299 [2024-10-14 17:47:48.426595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.299 [2024-10-14 17:47:48.426857] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.299 [2024-10-14 17:47:48.426873] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.299 [2024-10-14 17:47:48.426882] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.299 [2024-10-14 17:47:48.430936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.559 [2024-10-14 17:47:48.438935] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.559 [2024-10-14 17:47:48.439361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.559 [2024-10-14 17:47:48.439406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.559 [2024-10-14 17:47:48.439431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.559 [2024-10-14 17:47:48.439940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.559 [2024-10-14 17:47:48.440115] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.559 [2024-10-14 17:47:48.440124] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.559 [2024-10-14 17:47:48.440130] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.559 [2024-10-14 17:47:48.442907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.559 [2024-10-14 17:47:48.451790] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.559 [2024-10-14 17:47:48.452209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.559 [2024-10-14 17:47:48.452226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.559 [2024-10-14 17:47:48.452234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.559 [2024-10-14 17:47:48.452401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.559 [2024-10-14 17:47:48.452569] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.559 [2024-10-14 17:47:48.452578] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.559 [2024-10-14 17:47:48.452585] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.559 [2024-10-14 17:47:48.455242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.559 [2024-10-14 17:47:48.464696] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.559 [2024-10-14 17:47:48.465126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.559 [2024-10-14 17:47:48.465143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.559 [2024-10-14 17:47:48.465150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.559 [2024-10-14 17:47:48.465322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.559 [2024-10-14 17:47:48.465494] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.559 [2024-10-14 17:47:48.465503] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.559 [2024-10-14 17:47:48.465509] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.559 [2024-10-14 17:47:48.468286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.559 [2024-10-14 17:47:48.477743] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.559 [2024-10-14 17:47:48.478172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.560 [2024-10-14 17:47:48.478188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.560 [2024-10-14 17:47:48.478195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.560 [2024-10-14 17:47:48.478368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.560 [2024-10-14 17:47:48.478543] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.560 [2024-10-14 17:47:48.478552] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.560 [2024-10-14 17:47:48.478558] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.560 [2024-10-14 17:47:48.481288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.560 [2024-10-14 17:47:48.490658] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.560 [2024-10-14 17:47:48.491074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.560 [2024-10-14 17:47:48.491115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.560 [2024-10-14 17:47:48.491140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.560 [2024-10-14 17:47:48.491731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.560 [2024-10-14 17:47:48.492209] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.560 [2024-10-14 17:47:48.492217] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.560 [2024-10-14 17:47:48.492223] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.560 [2024-10-14 17:47:48.494893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.560 [2024-10-14 17:47:48.503537] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.560 [2024-10-14 17:47:48.503943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.560 [2024-10-14 17:47:48.503959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.560 [2024-10-14 17:47:48.503966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.560 [2024-10-14 17:47:48.504133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.560 [2024-10-14 17:47:48.504300] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.560 [2024-10-14 17:47:48.504309] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.560 [2024-10-14 17:47:48.504315] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.560 [2024-10-14 17:47:48.506998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.560 [2024-10-14 17:47:48.516397] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.560 [2024-10-14 17:47:48.516727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.560 [2024-10-14 17:47:48.516744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.560 [2024-10-14 17:47:48.516751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.560 [2024-10-14 17:47:48.516923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.560 [2024-10-14 17:47:48.517090] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.560 [2024-10-14 17:47:48.517099] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.560 [2024-10-14 17:47:48.517105] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.560 [2024-10-14 17:47:48.519738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.560 [2024-10-14 17:47:48.529253] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.560 [2024-10-14 17:47:48.529664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.560 [2024-10-14 17:47:48.529680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.560 [2024-10-14 17:47:48.529688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.560 [2024-10-14 17:47:48.529855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.560 [2024-10-14 17:47:48.530021] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.560 [2024-10-14 17:47:48.530029] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.560 [2024-10-14 17:47:48.530035] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.560 [2024-10-14 17:47:48.532665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.560 [2024-10-14 17:47:48.542034] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.560 [2024-10-14 17:47:48.542375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.560 [2024-10-14 17:47:48.542390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.560 [2024-10-14 17:47:48.542397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.560 [2024-10-14 17:47:48.542555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.560 [2024-10-14 17:47:48.542740] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.560 [2024-10-14 17:47:48.542749] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.560 [2024-10-14 17:47:48.542755] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.560 [2024-10-14 17:47:48.545481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.560 [2024-10-14 17:47:48.554757] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.560 [2024-10-14 17:47:48.555181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.560 [2024-10-14 17:47:48.555225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.560 [2024-10-14 17:47:48.555248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.560 [2024-10-14 17:47:48.555845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.560 [2024-10-14 17:47:48.556310] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.560 [2024-10-14 17:47:48.556318] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.560 [2024-10-14 17:47:48.556328] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.560 [2024-10-14 17:47:48.558932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
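The retry cadence is visible in the timestamps: each disconnect-reconnect-fail cycle takes roughly 13 ms, and a new reset is started as soon as _bdev_nvme_reset_ctrlr_complete reports failure. A self-contained sketch of that loop shape follows; the try_connect() helper is illustrative, not SPDK's implementation (a real host would poll spdk_nvme_ctrlr_reconnect_poll_async, as the log's function names show):

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-in for the transport connect; here it always fails the way
 * posix_sock_create does in the log above. */
static int try_connect(void)
{
    return -ECONNREFUSED; /* logged as errno = 111 */
}

int main(void)
{
    for (int attempt = 1; attempt <= 5; attempt++) {
        printf("resetting controller (attempt %d)\n", attempt);
        if (try_connect() == 0) {
            printf("controller reconnected\n");
            return 0;
        }
        /* On failure the controller is marked failed and the next reset
         * is scheduled, matching the ~13 ms cycle seen in the log. */
        printf("controller reinitialization failed\n");
        usleep(13000);
    }
    printf("Resetting controller failed.\n");
    return 1;
}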
00:30:49.560 [2024-10-14 17:47:48.567564] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.560 [2024-10-14 17:47:48.567886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.560 [2024-10-14 17:47:48.567901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.560 [2024-10-14 17:47:48.567909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.560 [2024-10-14 17:47:48.568067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.560 [2024-10-14 17:47:48.568230] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.560 [2024-10-14 17:47:48.568239] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.560 [2024-10-14 17:47:48.568244] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.560 [2024-10-14 17:47:48.570860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.560 [2024-10-14 17:47:48.580266] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.560 [2024-10-14 17:47:48.580710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.560 [2024-10-14 17:47:48.580756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.560 [2024-10-14 17:47:48.580780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.560 [2024-10-14 17:47:48.581268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.560 [2024-10-14 17:47:48.581435] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.560 [2024-10-14 17:47:48.581443] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.560 [2024-10-14 17:47:48.581449] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.560 [2024-10-14 17:47:48.584063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.560 [2024-10-14 17:47:48.592977] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.560 [2024-10-14 17:47:48.593389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.560 [2024-10-14 17:47:48.593438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.560 [2024-10-14 17:47:48.593461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.560 [2024-10-14 17:47:48.594054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.560 [2024-10-14 17:47:48.594374] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.560 [2024-10-14 17:47:48.594382] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.560 [2024-10-14 17:47:48.594388] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.560 [2024-10-14 17:47:48.600497] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.560 [2024-10-14 17:47:48.607900] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.560 [2024-10-14 17:47:48.608396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.560 [2024-10-14 17:47:48.608423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.560 [2024-10-14 17:47:48.608434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.560 [2024-10-14 17:47:48.608694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.560 [2024-10-14 17:47:48.608950] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.560 [2024-10-14 17:47:48.608962] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.560 [2024-10-14 17:47:48.608971] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.560 [2024-10-14 17:47:48.613026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.560 [2024-10-14 17:47:48.620976] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.561 [2024-10-14 17:47:48.621410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.561 [2024-10-14 17:47:48.621464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.561 [2024-10-14 17:47:48.621488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.561 [2024-10-14 17:47:48.622082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.561 [2024-10-14 17:47:48.622334] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.561 [2024-10-14 17:47:48.622342] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.561 [2024-10-14 17:47:48.622349] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.561 [2024-10-14 17:47:48.625079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.561 [2024-10-14 17:47:48.633717] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.561 [2024-10-14 17:47:48.634104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.561 [2024-10-14 17:47:48.634120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.561 [2024-10-14 17:47:48.634127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.561 [2024-10-14 17:47:48.634285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.561 [2024-10-14 17:47:48.634443] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.561 [2024-10-14 17:47:48.634451] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.561 [2024-10-14 17:47:48.634456] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.561 [2024-10-14 17:47:48.637074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.561 [2024-10-14 17:47:48.646538] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.561 [2024-10-14 17:47:48.646948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.561 [2024-10-14 17:47:48.646964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.561 [2024-10-14 17:47:48.646972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.561 [2024-10-14 17:47:48.647139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.561 [2024-10-14 17:47:48.647309] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.561 [2024-10-14 17:47:48.647317] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.561 [2024-10-14 17:47:48.647323] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.561 [2024-10-14 17:47:48.649934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.561 [2024-10-14 17:47:48.659297] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.561 [2024-10-14 17:47:48.659731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.561 [2024-10-14 17:47:48.659747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.561 [2024-10-14 17:47:48.659754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.561 [2024-10-14 17:47:48.659913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.561 [2024-10-14 17:47:48.660071] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.561 [2024-10-14 17:47:48.660079] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.561 [2024-10-14 17:47:48.660085] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.561 [2024-10-14 17:47:48.662751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.561 [2024-10-14 17:47:48.672123] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.561 [2024-10-14 17:47:48.672539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.561 [2024-10-14 17:47:48.672554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.561 [2024-10-14 17:47:48.672561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.561 [2024-10-14 17:47:48.672746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.561 [2024-10-14 17:47:48.672914] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.561 [2024-10-14 17:47:48.672922] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.561 [2024-10-14 17:47:48.672929] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.561 [2024-10-14 17:47:48.675534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.561 [2024-10-14 17:47:48.684901] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.561 [2024-10-14 17:47:48.685287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.561 [2024-10-14 17:47:48.685302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.561 [2024-10-14 17:47:48.685309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.561 [2024-10-14 17:47:48.685467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.561 [2024-10-14 17:47:48.685647] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.561 [2024-10-14 17:47:48.685656] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.561 [2024-10-14 17:47:48.685662] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.561 [2024-10-14 17:47:48.688331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.561 [2024-10-14 17:47:48.697980] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.561 [2024-10-14 17:47:48.698405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.561 [2024-10-14 17:47:48.698422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.561 [2024-10-14 17:47:48.698430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.561 [2024-10-14 17:47:48.698610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.821 [2024-10-14 17:47:48.698783] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.821 [2024-10-14 17:47:48.698792] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.821 [2024-10-14 17:47:48.698799] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.821 [2024-10-14 17:47:48.701493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.821 [2024-10-14 17:47:48.710695] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.821 [2024-10-14 17:47:48.711116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.821 [2024-10-14 17:47:48.711133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.821 [2024-10-14 17:47:48.711141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.821 [2024-10-14 17:47:48.711308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.821 [2024-10-14 17:47:48.711475] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.821 [2024-10-14 17:47:48.711483] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.821 [2024-10-14 17:47:48.711489] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.821 [2024-10-14 17:47:48.714242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.821 [2024-10-14 17:47:48.723544] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.821 [2024-10-14 17:47:48.723949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.821 [2024-10-14 17:47:48.723966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.821 [2024-10-14 17:47:48.723974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.821 [2024-10-14 17:47:48.724142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.821 [2024-10-14 17:47:48.724309] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.821 [2024-10-14 17:47:48.724317] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.821 [2024-10-14 17:47:48.724323] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.821 [2024-10-14 17:47:48.727083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.821 [2024-10-14 17:47:48.736515] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.821 [2024-10-14 17:47:48.736953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.821 [2024-10-14 17:47:48.736970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.821 [2024-10-14 17:47:48.736981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.821 [2024-10-14 17:47:48.737153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.821 [2024-10-14 17:47:48.737326] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.821 [2024-10-14 17:47:48.737334] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.821 [2024-10-14 17:47:48.737340] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.821 [2024-10-14 17:47:48.740085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.821 [2024-10-14 17:47:48.749298] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.821 [2024-10-14 17:47:48.749619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.821 [2024-10-14 17:47:48.749635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.821 [2024-10-14 17:47:48.749642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.822 [2024-10-14 17:47:48.749809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.822 [2024-10-14 17:47:48.749976] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.822 [2024-10-14 17:47:48.749984] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.822 [2024-10-14 17:47:48.749990] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.822 [2024-10-14 17:47:48.752624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.822 [2024-10-14 17:47:48.762082] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.822 [2024-10-14 17:47:48.762530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.822 [2024-10-14 17:47:48.762545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.822 [2024-10-14 17:47:48.762551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.822 [2024-10-14 17:47:48.762734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.822 [2024-10-14 17:47:48.762901] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.822 [2024-10-14 17:47:48.762909] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.822 [2024-10-14 17:47:48.762915] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.822 [2024-10-14 17:47:48.765517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.822 [2024-10-14 17:47:48.774915] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.822 [2024-10-14 17:47:48.775339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.822 [2024-10-14 17:47:48.775384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.822 [2024-10-14 17:47:48.775408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.822 [2024-10-14 17:47:48.775809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.822 [2024-10-14 17:47:48.775978] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.822 [2024-10-14 17:47:48.775989] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.822 [2024-10-14 17:47:48.775995] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.822 [2024-10-14 17:47:48.778658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.822 [2024-10-14 17:47:48.787672] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.822 [2024-10-14 17:47:48.788081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.822 [2024-10-14 17:47:48.788097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.822 [2024-10-14 17:47:48.788104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.822 [2024-10-14 17:47:48.788262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.822 [2024-10-14 17:47:48.788420] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.822 [2024-10-14 17:47:48.788428] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.822 [2024-10-14 17:47:48.788434] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.822 [2024-10-14 17:47:48.791052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.822 [2024-10-14 17:47:48.800416] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.822 [2024-10-14 17:47:48.800726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.822 [2024-10-14 17:47:48.800742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.822 [2024-10-14 17:47:48.800750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.822 [2024-10-14 17:47:48.800916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.822 [2024-10-14 17:47:48.801083] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.822 [2024-10-14 17:47:48.801091] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.822 [2024-10-14 17:47:48.801097] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.822 [2024-10-14 17:47:48.803725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.822 [2024-10-14 17:47:48.813249] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.822 [2024-10-14 17:47:48.813669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.822 [2024-10-14 17:47:48.813715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.822 [2024-10-14 17:47:48.813740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.822 [2024-10-14 17:47:48.814318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.822 [2024-10-14 17:47:48.814911] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.822 [2024-10-14 17:47:48.814938] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.822 [2024-10-14 17:47:48.814959] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.822 [2024-10-14 17:47:48.817570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.822 [2024-10-14 17:47:48.826103] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.822 [2024-10-14 17:47:48.826545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.822 [2024-10-14 17:47:48.826560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.822 [2024-10-14 17:47:48.826567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.822 [2024-10-14 17:47:48.826758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.822 [2024-10-14 17:47:48.826937] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.822 [2024-10-14 17:47:48.826945] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.822 [2024-10-14 17:47:48.826951] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.822 [2024-10-14 17:47:48.829558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.822 [2024-10-14 17:47:48.838888] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.822 [2024-10-14 17:47:48.839303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.822 [2024-10-14 17:47:48.839319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.822 [2024-10-14 17:47:48.839325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.822 [2024-10-14 17:47:48.839484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.822 [2024-10-14 17:47:48.839665] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.822 [2024-10-14 17:47:48.839674] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.822 [2024-10-14 17:47:48.839680] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.822 [2024-10-14 17:47:48.842285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.822 [2024-10-14 17:47:48.851733] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.822 [2024-10-14 17:47:48.852171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.822 [2024-10-14 17:47:48.852218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.822 [2024-10-14 17:47:48.852242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.822 [2024-10-14 17:47:48.852739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.822 [2024-10-14 17:47:48.852907] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.822 [2024-10-14 17:47:48.852915] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.822 [2024-10-14 17:47:48.852922] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.822 [2024-10-14 17:47:48.855584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.822 [2024-10-14 17:47:48.864491] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.822 [2024-10-14 17:47:48.864930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.822 [2024-10-14 17:47:48.864947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.822 [2024-10-14 17:47:48.864954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.822 [2024-10-14 17:47:48.865126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.822 [2024-10-14 17:47:48.865294] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.822 [2024-10-14 17:47:48.865302] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.822 [2024-10-14 17:47:48.865308] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.822 [2024-10-14 17:47:48.867962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.822 [2024-10-14 17:47:48.877360] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.822 [2024-10-14 17:47:48.877688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.823 [2024-10-14 17:47:48.877706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.823 [2024-10-14 17:47:48.877715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.823 [2024-10-14 17:47:48.877884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.823 [2024-10-14 17:47:48.878053] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.823 [2024-10-14 17:47:48.878061] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.823 [2024-10-14 17:47:48.878066] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.823 [2024-10-14 17:47:48.880683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.823 [2024-10-14 17:47:48.890180] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.823 [2024-10-14 17:47:48.890473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.823 [2024-10-14 17:47:48.890489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.823 [2024-10-14 17:47:48.890496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.823 [2024-10-14 17:47:48.890668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.823 [2024-10-14 17:47:48.890837] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.823 [2024-10-14 17:47:48.890845] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.823 [2024-10-14 17:47:48.890851] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.823 [2024-10-14 17:47:48.893515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.823 [2024-10-14 17:47:48.903026] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.823 [2024-10-14 17:47:48.903390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.823 [2024-10-14 17:47:48.903427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.823 [2024-10-14 17:47:48.903453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.823 [2024-10-14 17:47:48.904013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.823 [2024-10-14 17:47:48.904183] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.823 [2024-10-14 17:47:48.904192] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.823 [2024-10-14 17:47:48.904202] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.823 [2024-10-14 17:47:48.906836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.823 [2024-10-14 17:47:48.916020] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.823 [2024-10-14 17:47:48.916419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.823 [2024-10-14 17:47:48.916435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.823 [2024-10-14 17:47:48.916442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.823 [2024-10-14 17:47:48.916619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.823 [2024-10-14 17:47:48.916792] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.823 [2024-10-14 17:47:48.916801] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.823 [2024-10-14 17:47:48.916807] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.823 [2024-10-14 17:47:48.919548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.823 [2024-10-14 17:47:48.929104] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.823 [2024-10-14 17:47:48.929505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.823 [2024-10-14 17:47:48.929521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.823 [2024-10-14 17:47:48.929528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.823 [2024-10-14 17:47:48.929705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.823 [2024-10-14 17:47:48.929878] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.823 [2024-10-14 17:47:48.929886] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.823 [2024-10-14 17:47:48.929893] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.823 [2024-10-14 17:47:48.932639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.823 [2024-10-14 17:47:48.942182] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.823 [2024-10-14 17:47:48.942548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.823 [2024-10-14 17:47:48.942564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.823 [2024-10-14 17:47:48.942571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.823 [2024-10-14 17:47:48.942742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.823 [2024-10-14 17:47:48.942909] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.823 [2024-10-14 17:47:48.942917] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.823 [2024-10-14 17:47:48.942923] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.823 [2024-10-14 17:47:48.945682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:49.823 [2024-10-14 17:47:48.955108] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:49.823 [2024-10-14 17:47:48.955469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.823 [2024-10-14 17:47:48.955484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:49.823 [2024-10-14 17:47:48.955491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:49.823 [2024-10-14 17:47:48.955663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:49.823 [2024-10-14 17:47:48.955855] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:49.823 [2024-10-14 17:47:48.955865] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:49.823 [2024-10-14 17:47:48.955871] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:49.823 [2024-10-14 17:47:48.958652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.083 [2024-10-14 17:47:48.968152] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.083 [2024-10-14 17:47:48.968439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.083 [2024-10-14 17:47:48.968455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.083 [2024-10-14 17:47:48.968463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.083 [2024-10-14 17:47:48.968636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.083 [2024-10-14 17:47:48.968804] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.083 [2024-10-14 17:47:48.968812] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.083 [2024-10-14 17:47:48.968818] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.083 [2024-10-14 17:47:48.971473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.083 [2024-10-14 17:47:48.980980] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.083 6160.00 IOPS, 24.06 MiB/s [2024-10-14T15:47:49.221Z] [2024-10-14 17:47:48.982606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.083 [2024-10-14 17:47:48.982622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.083 [2024-10-14 17:47:48.982630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.083 [2024-10-14 17:47:48.982802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.083 [2024-10-14 17:47:48.982974] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.084 [2024-10-14 17:47:48.982990] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.084 [2024-10-14 17:47:48.982996] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.084 [2024-10-14 17:47:48.985744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.084 [2024-10-14 17:47:48.994053] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.084 [2024-10-14 17:47:48.994451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.084 [2024-10-14 17:47:48.994467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.084 [2024-10-14 17:47:48.994474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.084 [2024-10-14 17:47:48.994653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.084 [2024-10-14 17:47:48.994826] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.084 [2024-10-14 17:47:48.994834] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.084 [2024-10-14 17:47:48.994841] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.084 [2024-10-14 17:47:48.997565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.084 [2024-10-14 17:47:49.007048] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.084 [2024-10-14 17:47:49.007486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.084 [2024-10-14 17:47:49.007502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.084 [2024-10-14 17:47:49.007509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.084 [2024-10-14 17:47:49.007682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.084 [2024-10-14 17:47:49.007850] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.084 [2024-10-14 17:47:49.007858] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.084 [2024-10-14 17:47:49.007864] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.084 [2024-10-14 17:47:49.010563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.084 [2024-10-14 17:47:49.019843] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.084 [2024-10-14 17:47:49.020246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.084 [2024-10-14 17:47:49.020289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.084 [2024-10-14 17:47:49.020313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.084 [2024-10-14 17:47:49.020938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.084 [2024-10-14 17:47:49.021403] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.084 [2024-10-14 17:47:49.021411] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.084 [2024-10-14 17:47:49.021417] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.084 [2024-10-14 17:47:49.024024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.084 [2024-10-14 17:47:49.032558] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.084 [2024-10-14 17:47:49.032972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.084 [2024-10-14 17:47:49.032989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.084 [2024-10-14 17:47:49.032996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.084 [2024-10-14 17:47:49.033154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.084 [2024-10-14 17:47:49.033312] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.084 [2024-10-14 17:47:49.033320] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.084 [2024-10-14 17:47:49.033330] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.084 [2024-10-14 17:47:49.035950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.084 [2024-10-14 17:47:49.045388] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.084 [2024-10-14 17:47:49.045837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.084 [2024-10-14 17:47:49.045853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.084 [2024-10-14 17:47:49.045860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.084 [2024-10-14 17:47:49.046019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.084 [2024-10-14 17:47:49.046177] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.084 [2024-10-14 17:47:49.046185] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.084 [2024-10-14 17:47:49.046191] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.084 [2024-10-14 17:47:49.048812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.084 [2024-10-14 17:47:49.058209] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.084 [2024-10-14 17:47:49.058572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.084 [2024-10-14 17:47:49.058588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.084 [2024-10-14 17:47:49.058595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.084 [2024-10-14 17:47:49.058787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.084 [2024-10-14 17:47:49.058960] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.084 [2024-10-14 17:47:49.058968] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.084 [2024-10-14 17:47:49.058974] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.084 [2024-10-14 17:47:49.061632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.084 [2024-10-14 17:47:49.070949] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.084 [2024-10-14 17:47:49.071393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.084 [2024-10-14 17:47:49.071409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.084 [2024-10-14 17:47:49.071417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.084 [2024-10-14 17:47:49.071583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.084 [2024-10-14 17:47:49.071755] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.084 [2024-10-14 17:47:49.071764] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.084 [2024-10-14 17:47:49.071770] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.084 [2024-10-14 17:47:49.074373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.084 [2024-10-14 17:47:49.083753] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.084 [2024-10-14 17:47:49.084158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.084 [2024-10-14 17:47:49.084176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.084 [2024-10-14 17:47:49.084183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.084 [2024-10-14 17:47:49.084341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.084 [2024-10-14 17:47:49.084500] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.084 [2024-10-14 17:47:49.084507] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.084 [2024-10-14 17:47:49.084513] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.084 [2024-10-14 17:47:49.087135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.084 [2024-10-14 17:47:49.096553] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.084 [2024-10-14 17:47:49.096878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.084 [2024-10-14 17:47:49.096894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.084 [2024-10-14 17:47:49.096901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.084 [2024-10-14 17:47:49.097059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.084 [2024-10-14 17:47:49.097217] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.084 [2024-10-14 17:47:49.097225] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.084 [2024-10-14 17:47:49.097230] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.084 [2024-10-14 17:47:49.099834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.084 [2024-10-14 17:47:49.109349] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.084 [2024-10-14 17:47:49.109742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.084 [2024-10-14 17:47:49.109758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.084 [2024-10-14 17:47:49.109765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.084 [2024-10-14 17:47:49.109922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.084 [2024-10-14 17:47:49.110080] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.084 [2024-10-14 17:47:49.110087] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.084 [2024-10-14 17:47:49.110093] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.084 [2024-10-14 17:47:49.112684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.084 [2024-10-14 17:47:49.122096] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.084 [2024-10-14 17:47:49.122507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.084 [2024-10-14 17:47:49.122522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.085 [2024-10-14 17:47:49.122529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.085 [2024-10-14 17:47:49.122712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.085 [2024-10-14 17:47:49.122882] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.085 [2024-10-14 17:47:49.122890] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.085 [2024-10-14 17:47:49.122896] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.085 [2024-10-14 17:47:49.125502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.085 [2024-10-14 17:47:49.134904] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.085 [2024-10-14 17:47:49.135338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.085 [2024-10-14 17:47:49.135381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.085 [2024-10-14 17:47:49.135405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.085 [2024-10-14 17:47:49.135997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.085 [2024-10-14 17:47:49.136572] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.085 [2024-10-14 17:47:49.136580] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.085 [2024-10-14 17:47:49.136586] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.085 [2024-10-14 17:47:49.139224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.085 [2024-10-14 17:47:49.147735] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.085 [2024-10-14 17:47:49.148130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.085 [2024-10-14 17:47:49.148174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.085 [2024-10-14 17:47:49.148197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.085 [2024-10-14 17:47:49.148716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.085 [2024-10-14 17:47:49.148885] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.085 [2024-10-14 17:47:49.148893] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.085 [2024-10-14 17:47:49.148899] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.085 [2024-10-14 17:47:49.151484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.085 [2024-10-14 17:47:49.160544] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.085 [2024-10-14 17:47:49.160958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.085 [2024-10-14 17:47:49.160974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.085 [2024-10-14 17:47:49.160981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.085 [2024-10-14 17:47:49.161139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.085 [2024-10-14 17:47:49.161297] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.085 [2024-10-14 17:47:49.161305] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.085 [2024-10-14 17:47:49.161311] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.085 [2024-10-14 17:47:49.163997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.085 [2024-10-14 17:47:49.173374] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.085 [2024-10-14 17:47:49.173687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.085 [2024-10-14 17:47:49.173703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.085 [2024-10-14 17:47:49.173710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.085 [2024-10-14 17:47:49.173868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.085 [2024-10-14 17:47:49.174026] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.085 [2024-10-14 17:47:49.174034] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.085 [2024-10-14 17:47:49.174039] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.085 [2024-10-14 17:47:49.176658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.085 [2024-10-14 17:47:49.186201] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.085 [2024-10-14 17:47:49.186607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.085 [2024-10-14 17:47:49.186624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.085 [2024-10-14 17:47:49.186648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.085 [2024-10-14 17:47:49.186815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.085 [2024-10-14 17:47:49.186982] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.085 [2024-10-14 17:47:49.186990] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.085 [2024-10-14 17:47:49.186996] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.085 [2024-10-14 17:47:49.189632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.085 [2024-10-14 17:47:49.198978] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.085 [2024-10-14 17:47:49.199392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.085 [2024-10-14 17:47:49.199407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.085 [2024-10-14 17:47:49.199414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.085 [2024-10-14 17:47:49.199571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.085 [2024-10-14 17:47:49.199756] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.085 [2024-10-14 17:47:49.199765] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.085 [2024-10-14 17:47:49.199771] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.085 [2024-10-14 17:47:49.202375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.085 [2024-10-14 17:47:49.211700] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.085 [2024-10-14 17:47:49.212028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.085 [2024-10-14 17:47:49.212044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.085 [2024-10-14 17:47:49.212055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.085 [2024-10-14 17:47:49.212223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.085 [2024-10-14 17:47:49.212390] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.085 [2024-10-14 17:47:49.212398] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.085 [2024-10-14 17:47:49.212404] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.085 [2024-10-14 17:47:49.215063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.345 [2024-10-14 17:47:49.224721] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.345 [2024-10-14 17:47:49.225161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.345 [2024-10-14 17:47:49.225199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.345 [2024-10-14 17:47:49.225225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.345 [2024-10-14 17:47:49.225768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.345 [2024-10-14 17:47:49.225941] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.345 [2024-10-14 17:47:49.225949] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.345 [2024-10-14 17:47:49.225955] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.345 [2024-10-14 17:47:49.228725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.345 [2024-10-14 17:47:49.237526] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.345 [2024-10-14 17:47:49.237971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.345 [2024-10-14 17:47:49.237988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.345 [2024-10-14 17:47:49.237996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.345 [2024-10-14 17:47:49.238168] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.345 [2024-10-14 17:47:49.238340] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.345 [2024-10-14 17:47:49.238348] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.345 [2024-10-14 17:47:49.238355] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.345 [2024-10-14 17:47:49.241102] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.345 [2024-10-14 17:47:49.250527] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.345 [2024-10-14 17:47:49.250933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.345 [2024-10-14 17:47:49.250950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.345 [2024-10-14 17:47:49.250958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.345 [2024-10-14 17:47:49.251130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.345 [2024-10-14 17:47:49.251302] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.345 [2024-10-14 17:47:49.251316] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.345 [2024-10-14 17:47:49.251322] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.345 [2024-10-14 17:47:49.254063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.345 [2024-10-14 17:47:49.263544] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.345 [2024-10-14 17:47:49.264006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.345 [2024-10-14 17:47:49.264022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.345 [2024-10-14 17:47:49.264030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.345 [2024-10-14 17:47:49.264196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.345 [2024-10-14 17:47:49.264362] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.345 [2024-10-14 17:47:49.264370] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.345 [2024-10-14 17:47:49.264376] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.345 [2024-10-14 17:47:49.266983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.345 [2024-10-14 17:47:49.276254] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.345 [2024-10-14 17:47:49.276672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.345 [2024-10-14 17:47:49.276688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.345 [2024-10-14 17:47:49.276695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.346 [2024-10-14 17:47:49.276854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.346 [2024-10-14 17:47:49.277012] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.346 [2024-10-14 17:47:49.277020] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.346 [2024-10-14 17:47:49.277025] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.346 [2024-10-14 17:47:49.279643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.346 [2024-10-14 17:47:49.289045] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.346 [2024-10-14 17:47:49.289466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.346 [2024-10-14 17:47:49.289509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.346 [2024-10-14 17:47:49.289532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.346 [2024-10-14 17:47:49.290009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.346 [2024-10-14 17:47:49.290178] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.346 [2024-10-14 17:47:49.290186] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.346 [2024-10-14 17:47:49.290193] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.346 [2024-10-14 17:47:49.292803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.346 [2024-10-14 17:47:49.301792] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.346 [2024-10-14 17:47:49.302211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.346 [2024-10-14 17:47:49.302226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.346 [2024-10-14 17:47:49.302233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.346 [2024-10-14 17:47:49.302391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.346 [2024-10-14 17:47:49.302550] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.346 [2024-10-14 17:47:49.302558] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.346 [2024-10-14 17:47:49.302563] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.346 [2024-10-14 17:47:49.305187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.346 [2024-10-14 17:47:49.314661] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.346 [2024-10-14 17:47:49.314989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.346 [2024-10-14 17:47:49.315005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.346 [2024-10-14 17:47:49.315013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.346 [2024-10-14 17:47:49.315180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.346 [2024-10-14 17:47:49.315348] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.346 [2024-10-14 17:47:49.315356] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.346 [2024-10-14 17:47:49.315362] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.346 [2024-10-14 17:47:49.317973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.346 [2024-10-14 17:47:49.327561] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.346 [2024-10-14 17:47:49.328025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.346 [2024-10-14 17:47:49.328069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.346 [2024-10-14 17:47:49.328093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.346 [2024-10-14 17:47:49.328688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.346 [2024-10-14 17:47:49.328941] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.346 [2024-10-14 17:47:49.328950] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.346 [2024-10-14 17:47:49.328956] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.346 [2024-10-14 17:47:49.331632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.346 [2024-10-14 17:47:49.340431] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.346 [2024-10-14 17:47:49.340852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.346 [2024-10-14 17:47:49.340869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.346 [2024-10-14 17:47:49.340877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.346 [2024-10-14 17:47:49.341048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.346 [2024-10-14 17:47:49.341216] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.346 [2024-10-14 17:47:49.341226] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.346 [2024-10-14 17:47:49.341232] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.346 [2024-10-14 17:47:49.343888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.346 [2024-10-14 17:47:49.353307] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.346 [2024-10-14 17:47:49.353592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.346 [2024-10-14 17:47:49.353612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.346 [2024-10-14 17:47:49.353620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.346 [2024-10-14 17:47:49.353787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.346 [2024-10-14 17:47:49.353953] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.346 [2024-10-14 17:47:49.353963] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.346 [2024-10-14 17:47:49.353969] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.346 [2024-10-14 17:47:49.356569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.346 [2024-10-14 17:47:49.366388] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.346 [2024-10-14 17:47:49.366676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.346 [2024-10-14 17:47:49.366693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.346 [2024-10-14 17:47:49.366701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.346 [2024-10-14 17:47:49.366872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.346 [2024-10-14 17:47:49.367045] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.346 [2024-10-14 17:47:49.367053] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.346 [2024-10-14 17:47:49.367060] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.346 [2024-10-14 17:47:49.369815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.346 [2024-10-14 17:47:49.379348] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.346 [2024-10-14 17:47:49.379624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.346 [2024-10-14 17:47:49.379641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.346 [2024-10-14 17:47:49.379648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.346 [2024-10-14 17:47:49.379815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.346 [2024-10-14 17:47:49.379989] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.346 [2024-10-14 17:47:49.379996] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.346 [2024-10-14 17:47:49.380005] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.346 [2024-10-14 17:47:49.382594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.346 [2024-10-14 17:47:49.392261] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.346 [2024-10-14 17:47:49.392592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.346 [2024-10-14 17:47:49.392616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.346 [2024-10-14 17:47:49.392623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.346 [2024-10-14 17:47:49.392795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.346 [2024-10-14 17:47:49.392975] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.346 [2024-10-14 17:47:49.392983] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.346 [2024-10-14 17:47:49.392989] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.346 [2024-10-14 17:47:49.395594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.346 [2024-10-14 17:47:49.405098] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.346 [2024-10-14 17:47:49.405398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.346 [2024-10-14 17:47:49.405414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420
00:30:50.346 [2024-10-14 17:47:49.405421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set
00:30:50.346 [2024-10-14 17:47:49.405588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor
00:30:50.346 [2024-10-14 17:47:49.405761] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:50.346 [2024-10-14 17:47:49.405770] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:50.346 [2024-10-14 17:47:49.405776] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.346 [2024-10-14 17:47:49.408446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:50.346 [2024-10-14 17:47:49.418010] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.346 [2024-10-14 17:47:49.418295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.346 [2024-10-14 17:47:49.418310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.347 [2024-10-14 17:47:49.418317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.347 [2024-10-14 17:47:49.418484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.347 [2024-10-14 17:47:49.418657] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.347 [2024-10-14 17:47:49.418665] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.347 [2024-10-14 17:47:49.418671] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.347 [2024-10-14 17:47:49.421278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.347 [2024-10-14 17:47:49.430782] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.347 [2024-10-14 17:47:49.431063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.347 [2024-10-14 17:47:49.431078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.347 [2024-10-14 17:47:49.431085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.347 [2024-10-14 17:47:49.431252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.347 [2024-10-14 17:47:49.431419] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.347 [2024-10-14 17:47:49.431428] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.347 [2024-10-14 17:47:49.431434] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.347 [2024-10-14 17:47:49.434043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.347 [2024-10-14 17:47:49.443654] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.347 [2024-10-14 17:47:49.444002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.347 [2024-10-14 17:47:49.444017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.347 [2024-10-14 17:47:49.444025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.347 [2024-10-14 17:47:49.444191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.347 [2024-10-14 17:47:49.444358] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.347 [2024-10-14 17:47:49.444366] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.347 [2024-10-14 17:47:49.444372] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.347 [2024-10-14 17:47:49.447085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.347 [2024-10-14 17:47:49.456616] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.347 [2024-10-14 17:47:49.456890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.347 [2024-10-14 17:47:49.456906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.347 [2024-10-14 17:47:49.456913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.347 [2024-10-14 17:47:49.457080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.347 [2024-10-14 17:47:49.457247] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.347 [2024-10-14 17:47:49.457255] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.347 [2024-10-14 17:47:49.457261] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.347 [2024-10-14 17:47:49.459934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.347 [2024-10-14 17:47:49.469433] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.347 [2024-10-14 17:47:49.469726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.347 [2024-10-14 17:47:49.469743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.347 [2024-10-14 17:47:49.469750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.347 [2024-10-14 17:47:49.469920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.347 [2024-10-14 17:47:49.470088] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.347 [2024-10-14 17:47:49.470096] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.347 [2024-10-14 17:47:49.470102] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.347 [2024-10-14 17:47:49.472719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.347 [2024-10-14 17:47:49.482437] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.347 [2024-10-14 17:47:49.482789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.347 [2024-10-14 17:47:49.482807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.347 [2024-10-14 17:47:49.482815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.347 [2024-10-14 17:47:49.482988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.347 [2024-10-14 17:47:49.483161] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.347 [2024-10-14 17:47:49.483169] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.347 [2024-10-14 17:47:49.483176] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.607 [2024-10-14 17:47:49.485959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.607 [2024-10-14 17:47:49.495398] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.607 [2024-10-14 17:47:49.495690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.607 [2024-10-14 17:47:49.495707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.607 [2024-10-14 17:47:49.495714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.607 [2024-10-14 17:47:49.495902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.607 [2024-10-14 17:47:49.496075] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.607 [2024-10-14 17:47:49.496084] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.607 [2024-10-14 17:47:49.496090] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.607 [2024-10-14 17:47:49.498840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.607 [2024-10-14 17:47:49.508435] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.607 [2024-10-14 17:47:49.508835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.607 [2024-10-14 17:47:49.508852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.607 [2024-10-14 17:47:49.508860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.607 [2024-10-14 17:47:49.509032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.607 [2024-10-14 17:47:49.509204] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.607 [2024-10-14 17:47:49.509212] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.607 [2024-10-14 17:47:49.509221] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.607 [2024-10-14 17:47:49.511923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.607 [2024-10-14 17:47:49.521473] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.607 [2024-10-14 17:47:49.521854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.607 [2024-10-14 17:47:49.521871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.607 [2024-10-14 17:47:49.521878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.607 [2024-10-14 17:47:49.522049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.607 [2024-10-14 17:47:49.522222] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.607 [2024-10-14 17:47:49.522231] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.607 [2024-10-14 17:47:49.522237] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.607 [2024-10-14 17:47:49.524990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.607 [2024-10-14 17:47:49.534891] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.607 [2024-10-14 17:47:49.535348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.607 [2024-10-14 17:47:49.535365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.607 [2024-10-14 17:47:49.535373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.607 [2024-10-14 17:47:49.535556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.607 [2024-10-14 17:47:49.535746] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.607 [2024-10-14 17:47:49.535756] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.607 [2024-10-14 17:47:49.535762] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.607 [2024-10-14 17:47:49.538608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.607 [2024-10-14 17:47:49.547903] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.607 [2024-10-14 17:47:49.548322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.607 [2024-10-14 17:47:49.548366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.607 [2024-10-14 17:47:49.548390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.607 [2024-10-14 17:47:49.548981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.607 [2024-10-14 17:47:49.549565] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.607 [2024-10-14 17:47:49.549590] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.607 [2024-10-14 17:47:49.549629] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.608 [2024-10-14 17:47:49.552374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.608 [2024-10-14 17:47:49.560916] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.608 [2024-10-14 17:47:49.561337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.608 [2024-10-14 17:47:49.561388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.608 [2024-10-14 17:47:49.561412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.608 [2024-10-14 17:47:49.561824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.608 [2024-10-14 17:47:49.561999] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.608 [2024-10-14 17:47:49.562007] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.608 [2024-10-14 17:47:49.562013] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.608 [2024-10-14 17:47:49.564744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.608 [2024-10-14 17:47:49.573747] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.608 [2024-10-14 17:47:49.574036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.608 [2024-10-14 17:47:49.574052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.608 [2024-10-14 17:47:49.574060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.608 [2024-10-14 17:47:49.574232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.608 [2024-10-14 17:47:49.574390] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.608 [2024-10-14 17:47:49.574398] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.608 [2024-10-14 17:47:49.574403] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.608 [2024-10-14 17:47:49.577021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.608 [2024-10-14 17:47:49.586531] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.608 [2024-10-14 17:47:49.586829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.608 [2024-10-14 17:47:49.586845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.608 [2024-10-14 17:47:49.586853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.608 [2024-10-14 17:47:49.587020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.608 [2024-10-14 17:47:49.587187] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.608 [2024-10-14 17:47:49.587195] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.608 [2024-10-14 17:47:49.587201] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.608 [2024-10-14 17:47:49.589830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.608 [2024-10-14 17:47:49.599392] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.608 [2024-10-14 17:47:49.599811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.608 [2024-10-14 17:47:49.599856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.608 [2024-10-14 17:47:49.599880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.608 [2024-10-14 17:47:49.600458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.608 [2024-10-14 17:47:49.600669] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.608 [2024-10-14 17:47:49.600678] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.608 [2024-10-14 17:47:49.600684] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.608 [2024-10-14 17:47:49.603333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.608 [2024-10-14 17:47:49.612300] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.608 [2024-10-14 17:47:49.612655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.608 [2024-10-14 17:47:49.612672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.608 [2024-10-14 17:47:49.612679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.608 [2024-10-14 17:47:49.612847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.608 [2024-10-14 17:47:49.613014] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.608 [2024-10-14 17:47:49.613022] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.608 [2024-10-14 17:47:49.613028] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.608 [2024-10-14 17:47:49.615690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.608 [2024-10-14 17:47:49.625147] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.608 [2024-10-14 17:47:49.625532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.608 [2024-10-14 17:47:49.625581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.608 [2024-10-14 17:47:49.625616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.608 [2024-10-14 17:47:49.626197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.608 [2024-10-14 17:47:49.626788] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.608 [2024-10-14 17:47:49.626807] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.608 [2024-10-14 17:47:49.626821] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.608 [2024-10-14 17:47:49.633056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.608 [2024-10-14 17:47:49.640035] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.608 [2024-10-14 17:47:49.640548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.608 [2024-10-14 17:47:49.640570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.608 [2024-10-14 17:47:49.640581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.608 [2024-10-14 17:47:49.640842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.608 [2024-10-14 17:47:49.641099] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.608 [2024-10-14 17:47:49.641110] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.608 [2024-10-14 17:47:49.641119] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.608 [2024-10-14 17:47:49.645182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.608 [2024-10-14 17:47:49.653027] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.608 [2024-10-14 17:47:49.653386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.608 [2024-10-14 17:47:49.653403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.608 [2024-10-14 17:47:49.653410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.608 [2024-10-14 17:47:49.653582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.608 [2024-10-14 17:47:49.653761] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.608 [2024-10-14 17:47:49.653770] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.608 [2024-10-14 17:47:49.653776] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.608 [2024-10-14 17:47:49.656521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1260669 Killed "${NVMF_APP[@]}" "$@" 00:30:50.608 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:30:50.608 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:50.608 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:50.608 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:50.608 [2024-10-14 17:47:49.666040] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.608 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.608 [2024-10-14 17:47:49.666415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.608 [2024-10-14 17:47:49.666432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.608 [2024-10-14 17:47:49.666439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.608 [2024-10-14 17:47:49.666611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.608 [2024-10-14 17:47:49.666798] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.608 [2024-10-14 17:47:49.666807] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.608 [2024-10-14 17:47:49.666813] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.608 [2024-10-14 17:47:49.669556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.608 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1262043 00:30:50.608 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:50.608 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1262043 00:30:50.608 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1262043 ']' 00:30:50.608 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.608 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:50.608 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.608 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:50.608 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.608 [2024-10-14 17:47:49.679130] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.608 [2024-10-14 17:47:49.679583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.608 [2024-10-14 17:47:49.679603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.609 [2024-10-14 17:47:49.679613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.609 [2024-10-14 17:47:49.679785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.609 [2024-10-14 17:47:49.679959] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.609 [2024-10-14 17:47:49.679967] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.609 [2024-10-14 17:47:49.679974] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.609 [2024-10-14 17:47:49.682725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
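The interleaved xtrace output explains the refusals: bdevperf.sh line 35 has just killed the previous target process (1260669), and tgt_init / nvmfappstart -m 0xE is restarting nvmf_tgt as pid 1262043 inside the cvl_0_0_ns_spdk namespace, so nothing listens on 10.0.0.2:4420 until the new process is up. A rough sketch of what the traced helper does here, with the command line and namespace copied from the trace and the surrounding wrapper illustrative only:

    # Hedged reconstruction of the traced nvmfappstart step; the command line
    # and netns name are from the trace above, the wrapper shape is assumed.
    NVMF_APP=(ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt)
    "${NVMF_APP[@]}" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # SPDK test helper: blocks until /var/tmp/spdk.sock answers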
00:30:50.609 [2024-10-14 17:47:49.692137] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.609 [2024-10-14 17:47:49.692484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.609 [2024-10-14 17:47:49.692500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.609 [2024-10-14 17:47:49.692508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.609 [2024-10-14 17:47:49.692685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.609 [2024-10-14 17:47:49.692858] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.609 [2024-10-14 17:47:49.692865] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.609 [2024-10-14 17:47:49.692871] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.609 [2024-10-14 17:47:49.695617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.609 [2024-10-14 17:47:49.705168] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.609 [2024-10-14 17:47:49.705587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.609 [2024-10-14 17:47:49.705609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.609 [2024-10-14 17:47:49.705617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.609 [2024-10-14 17:47:49.705790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.609 [2024-10-14 17:47:49.705962] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.609 [2024-10-14 17:47:49.705971] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.609 [2024-10-14 17:47:49.705977] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.609 [2024-10-14 17:47:49.708729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.609 [2024-10-14 17:47:49.718193] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.609 [2024-10-14 17:47:49.718618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.609 [2024-10-14 17:47:49.718634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.609 [2024-10-14 17:47:49.718645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.609 [2024-10-14 17:47:49.718818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.609 [2024-10-14 17:47:49.718990] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.609 [2024-10-14 17:47:49.718999] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.609 [2024-10-14 17:47:49.719006] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.609 [2024-10-14 17:47:49.721389] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:30:50.609 [2024-10-14 17:47:49.721428] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.609 [2024-10-14 17:47:49.721767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.609 [2024-10-14 17:47:49.731288] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.609 [2024-10-14 17:47:49.731716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.609 [2024-10-14 17:47:49.731733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.609 [2024-10-14 17:47:49.731741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.609 [2024-10-14 17:47:49.731914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.609 [2024-10-14 17:47:49.732087] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.609 [2024-10-14 17:47:49.732096] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.609 [2024-10-14 17:47:49.732102] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.609 [2024-10-14 17:47:49.734835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.609 [2024-10-14 17:47:49.744433] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.609 [2024-10-14 17:47:49.744877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.609 [2024-10-14 17:47:49.744896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.609 [2024-10-14 17:47:49.744904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.609 [2024-10-14 17:47:49.745079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.609 [2024-10-14 17:47:49.745251] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.609 [2024-10-14 17:47:49.745260] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.609 [2024-10-14 17:47:49.745267] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.869 [2024-10-14 17:47:49.748045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.869 [2024-10-14 17:47:49.757438] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.869 [2024-10-14 17:47:49.757878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.869 [2024-10-14 17:47:49.757894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.869 [2024-10-14 17:47:49.757906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.869 [2024-10-14 17:47:49.758080] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.869 [2024-10-14 17:47:49.758253] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.869 [2024-10-14 17:47:49.758262] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.869 [2024-10-14 17:47:49.758269] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.869 [2024-10-14 17:47:49.761016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.869 [2024-10-14 17:47:49.770405] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.869 [2024-10-14 17:47:49.770811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.869 [2024-10-14 17:47:49.770828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.869 [2024-10-14 17:47:49.770836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.869 [2024-10-14 17:47:49.771008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.869 [2024-10-14 17:47:49.771180] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.869 [2024-10-14 17:47:49.771188] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.869 [2024-10-14 17:47:49.771195] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.869 [2024-10-14 17:47:49.773948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.869 [2024-10-14 17:47:49.783498] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.869 [2024-10-14 17:47:49.783903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.869 [2024-10-14 17:47:49.783920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.869 [2024-10-14 17:47:49.783928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.869 [2024-10-14 17:47:49.784099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.869 [2024-10-14 17:47:49.784270] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.869 [2024-10-14 17:47:49.784279] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.869 [2024-10-14 17:47:49.784285] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.869 [2024-10-14 17:47:49.787030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.870 [2024-10-14 17:47:49.795489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:50.870 [2024-10-14 17:47:49.796563] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.870 [2024-10-14 17:47:49.796970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.870 [2024-10-14 17:47:49.796987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.870 [2024-10-14 17:47:49.796994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.870 [2024-10-14 17:47:49.797167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.870 [2024-10-14 17:47:49.797344] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.870 [2024-10-14 17:47:49.797352] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.870 [2024-10-14 17:47:49.797358] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.870 [2024-10-14 17:47:49.800085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.870 [2024-10-14 17:47:49.809577] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.870 [2024-10-14 17:47:49.810017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.870 [2024-10-14 17:47:49.810036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.870 [2024-10-14 17:47:49.810044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.870 [2024-10-14 17:47:49.810217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.870 [2024-10-14 17:47:49.810391] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.870 [2024-10-14 17:47:49.810400] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.870 [2024-10-14 17:47:49.810406] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.870 [2024-10-14 17:47:49.813125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.870 [2024-10-14 17:47:49.822585] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.870 [2024-10-14 17:47:49.822919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.870 [2024-10-14 17:47:49.822936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.870 [2024-10-14 17:47:49.822944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.870 [2024-10-14 17:47:49.823119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.870 [2024-10-14 17:47:49.823293] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.870 [2024-10-14 17:47:49.823302] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.870 [2024-10-14 17:47:49.823308] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.870 [2024-10-14 17:47:49.826054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.870 [2024-10-14 17:47:49.835506] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.870 [2024-10-14 17:47:49.835916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.870 [2024-10-14 17:47:49.835933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.870 [2024-10-14 17:47:49.835941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.870 [2024-10-14 17:47:49.836114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.870 [2024-10-14 17:47:49.836286] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.870 [2024-10-14 17:47:49.836295] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.870 [2024-10-14 17:47:49.836302] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.870 [2024-10-14 17:47:49.838131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.870 [2024-10-14 17:47:49.838158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.870 [2024-10-14 17:47:49.838165] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.870 [2024-10-14 17:47:49.838171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.870 [2024-10-14 17:47:49.838176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:50.870 [2024-10-14 17:47:49.839067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
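The app_setup_trace notices show the new target came up with tracepoint group mask 0xFFFF (the -e 0xFFFF above), so the reconnect storm can also be inspected from the trace buffer; both commands below are the ones the notices themselves suggest:

    # Both commands come straight from the app_setup_trace notices above.
    spdk_trace -s nvmf -i 0        # capture a snapshot of events at runtime
    cp /dev/shm/nvmf_trace.0 .     # or keep the shm buffer for offline analysis/debug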
00:30:50.870 [2024-10-14 17:47:49.839581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.870 [2024-10-14 17:47:49.839687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.870 [2024-10-14 17:47:49.839688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:50.870 [2024-10-14 17:47:49.848498] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.870 [2024-10-14 17:47:49.848931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.870 [2024-10-14 17:47:49.848951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.870 [2024-10-14 17:47:49.848960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.870 [2024-10-14 17:47:49.849134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.870 [2024-10-14 17:47:49.849307] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.870 [2024-10-14 17:47:49.849316] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.870 [2024-10-14 17:47:49.849323] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.870 [2024-10-14 17:47:49.852074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.870 [2024-10-14 17:47:49.861459] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.870 [2024-10-14 17:47:49.861884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.870 [2024-10-14 17:47:49.861902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.870 [2024-10-14 17:47:49.861911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.870 [2024-10-14 17:47:49.862085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.870 [2024-10-14 17:47:49.862257] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.870 [2024-10-14 17:47:49.862266] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.870 [2024-10-14 17:47:49.862273] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.870 [2024-10-14 17:47:49.865020] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
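"Total cores available: 3" and the three reactor lines above follow directly from the coremask the target was launched with (-m 0xE, passed through to the DPDK EAL as -c 0xE): 0xE is binary 1110, selecting cores 1 through 3. A quick check of that arithmetic in the shell:

    # 0xE = 1110b: the mask from -m 0xE / -c 0xE selects cores 1-3,
    # matching the three reactors reported above.
    m=$((0xE))
    for i in {0..7}; do
        (( (m >> i) & 1 )) && echo "reactor core $i"   # prints cores 1, 2, 3
    done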
00:30:50.870 [2024-10-14 17:47:49.874424] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.870 [2024-10-14 17:47:49.874852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.870 [2024-10-14 17:47:49.874872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.870 [2024-10-14 17:47:49.874880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.870 [2024-10-14 17:47:49.875053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.870 [2024-10-14 17:47:49.875232] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.870 [2024-10-14 17:47:49.875241] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.870 [2024-10-14 17:47:49.875248] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.870 [2024-10-14 17:47:49.877994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.870 [2024-10-14 17:47:49.887376] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.870 [2024-10-14 17:47:49.887793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.870 [2024-10-14 17:47:49.887813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.870 [2024-10-14 17:47:49.887822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.870 [2024-10-14 17:47:49.887995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.870 [2024-10-14 17:47:49.888167] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.870 [2024-10-14 17:47:49.888176] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.870 [2024-10-14 17:47:49.888183] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.870 [2024-10-14 17:47:49.890931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.870 [2024-10-14 17:47:49.900478] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.870 [2024-10-14 17:47:49.900899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.870 [2024-10-14 17:47:49.900918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.870 [2024-10-14 17:47:49.900927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.870 [2024-10-14 17:47:49.901100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.870 [2024-10-14 17:47:49.901273] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.870 [2024-10-14 17:47:49.901281] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.870 [2024-10-14 17:47:49.901288] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.870 [2024-10-14 17:47:49.904033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.870 [2024-10-14 17:47:49.913421] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.870 [2024-10-14 17:47:49.913827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.870 [2024-10-14 17:47:49.913844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.870 [2024-10-14 17:47:49.913852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.870 [2024-10-14 17:47:49.914025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.870 [2024-10-14 17:47:49.914197] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.870 [2024-10-14 17:47:49.914206] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.870 [2024-10-14 17:47:49.914213] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.871 [2024-10-14 17:47:49.916960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.871 [2024-10-14 17:47:49.926502] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.871 [2024-10-14 17:47:49.926924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.871 [2024-10-14 17:47:49.926942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.871 [2024-10-14 17:47:49.926950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.871 [2024-10-14 17:47:49.927123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.871 [2024-10-14 17:47:49.927299] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.871 [2024-10-14 17:47:49.927307] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.871 [2024-10-14 17:47:49.927314] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.871 [2024-10-14 17:47:49.930061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.871 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:50.871 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:30:50.871 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:50.871 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:50.871 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.871 [2024-10-14 17:47:49.939610] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.871 [2024-10-14 17:47:49.939889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.871 [2024-10-14 17:47:49.939906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.871 [2024-10-14 17:47:49.939914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.871 [2024-10-14 17:47:49.940087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.871 [2024-10-14 17:47:49.940264] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.871 [2024-10-14 17:47:49.940272] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.871 [2024-10-14 17:47:49.940278] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.871 [2024-10-14 17:47:49.943029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.871 [2024-10-14 17:47:49.952582] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.871 [2024-10-14 17:47:49.952923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.871 [2024-10-14 17:47:49.952941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.871 [2024-10-14 17:47:49.952949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.871 [2024-10-14 17:47:49.953122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.871 [2024-10-14 17:47:49.953294] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.871 [2024-10-14 17:47:49.953304] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.871 [2024-10-14 17:47:49.953310] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.871 [2024-10-14 17:47:49.956060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.871 [2024-10-14 17:47:49.965632] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.871 [2024-10-14 17:47:49.965916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.871 [2024-10-14 17:47:49.965933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.871 [2024-10-14 17:47:49.965941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.871 [2024-10-14 17:47:49.966113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.871 [2024-10-14 17:47:49.966286] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.871 [2024-10-14 17:47:49.966295] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.871 [2024-10-14 17:47:49.966304] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.871 [2024-10-14 17:47:49.969059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
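Every block in this run of failures is the same five-step signature, differing only in timestamps: posix_sock_create() returns errno 111 (ECONNREFUSED on Linux) because nothing is listening on 10.0.0.2:4420 yet, the qpair flush then fails on the stale descriptor, spdk_nvme_ctrlr_reconnect_poll_async() reports the reinitialization failure, and bdev_nvme schedules the next reset. The loop is expected noise here; it stops the moment the listener is registered further down, where the log flips to "Resetting controller successful". As a hedged sketch only, the cadence of such a retry loop is normally bounded by the reconnect options given at attach time; the flag names below are taken from SPDK's scripts/rpc.py as best I recall them, and the RPC socket path and bdev name are hypothetical, not from this log:

# Hypothetical attach with bounded retries (flag names assumed, not traced above):
#   --reconnect-delay-sec 1        wait 1 s between connect() attempts
#   --ctrlr-loss-timeout-sec 15    stop retrying and delete the controller after 15 s
#   --fast-io-fail-timeout-sec 5   fail queued I/O after 5 s without a connection
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
  -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 \
  --reconnect-delay-sec 1 --ctrlr-loss-timeout-sec 15 --fast-io-fail-timeout-sec 5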
00:30:50.871 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:50.871 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:50.871 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.871 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.871 [2024-10-14 17:47:49.976027] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.871 [2024-10-14 17:47:49.978642] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.871 [2024-10-14 17:47:49.979041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.871 [2024-10-14 17:47:49.979058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.871 [2024-10-14 17:47:49.979066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.871 [2024-10-14 17:47:49.979238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.871 [2024-10-14 17:47:49.979410] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.871 [2024-10-14 17:47:49.979419] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.871 [2024-10-14 17:47:49.979425] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.871 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.871 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:50.871 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.871 [2024-10-14 17:47:49.982182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:50.871 17:47:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:50.871 5133.33 IOPS, 20.05 MiB/s [2024-10-14T15:47:50.009Z] [2024-10-14 17:47:49.991711] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.871 [2024-10-14 17:47:49.992132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.871 [2024-10-14 17:47:49.992149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.871 [2024-10-14 17:47:49.992156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.871 [2024-10-14 17:47:49.992330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.871 [2024-10-14 17:47:49.992505] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.871 [2024-10-14 17:47:49.992514] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.871 [2024-10-14 17:47:49.992520] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:50.871 [2024-10-14 17:47:49.995265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.871 [2024-10-14 17:47:50.005058] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:50.871 [2024-10-14 17:47:50.005499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.871 [2024-10-14 17:47:50.005517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:50.871 [2024-10-14 17:47:50.005525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:50.871 [2024-10-14 17:47:50.005728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:50.871 [2024-10-14 17:47:50.005925] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.871 [2024-10-14 17:47:50.005935] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:50.871 [2024-10-14 17:47:50.005942] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.130 [2024-10-14 17:47:50.009127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
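The "5133.33 IOPS, 20.05 MiB/s" progress sample above, and the ramp that follows, are consistent with the 4096-byte I/O size shown later in the job header ("IO size: 4096"): MiB/s = IOPS x 4096 / 2^20. A quick sanity check of the first sample:

# Throughput check for the 4 KiB verify workload:
echo '5133.33 * 4096 / 1048576' | bc -l    # = 20.052..., matching 20.05 MiB/s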
00:30:51.130 [2024-10-14 17:47:50.018650] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.130 Malloc0 00:30:51.130 [2024-10-14 17:47:50.019021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-10-14 17:47:50.019040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:51.130 [2024-10-14 17:47:50.019049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:51.130 [2024-10-14 17:47:50.019232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:51.130 [2024-10-14 17:47:50.019417] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.130 [2024-10-14 17:47:50.019426] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.130 [2024-10-14 17:47:50.019434] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.130 17:47:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.130 17:47:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:51.130 17:47:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.130 17:47:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:51.130 [2024-10-14 17:47:50.022356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.130 17:47:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.130 17:47:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:51.130 [2024-10-14 17:47:50.031747] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.130 17:47:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.130 17:47:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:51.130 [2024-10-14 17:47:50.032176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-10-14 17:47:50.032198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249f5c0 with addr=10.0.0.2, port=4420 00:30:51.130 [2024-10-14 17:47:50.032206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249f5c0 is same with the state(6) to be set 00:30:51.130 [2024-10-14 17:47:50.032379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f5c0 (9): Bad file descriptor 00:30:51.130 [2024-10-14 17:47:50.032564] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.130 [2024-10-14 17:47:50.032574] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.130 [2024-10-14 17:47:50.032581] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
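Stripped of the interleaved reconnect noise, the rpc_cmd trace in this stretch of the log is the complete bdevperf target bring-up; each command below is copied from the xtrace (the listener registration is the step that appears just after this point, and it is what finally lets the host's resets succeed):

# host/bdevperf.sh target bring-up, as traced (rpc_cmd wraps scripts/rpc.py):
rpc_cmd nvmf_create_transport -t tcp -o -u 8192                  # TCP Transport Init
rpc_cmd bdev_malloc_create 64 512 -b Malloc0                     # 64 MB malloc bdev, 512 B blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420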
00:30:51.130 [2024-10-14 17:47:50.035329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.130 17:47:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.130 17:47:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.130 17:47:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.130 17:47:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:51.130 [2024-10-14 17:47:50.042875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.130 [2024-10-14 17:47:50.044713] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.130 17:47:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.130 17:47:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1260950 00:30:51.130 [2024-10-14 17:47:50.074748] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:53.001 5901.71 IOPS, 23.05 MiB/s [2024-10-14T15:47:53.075Z] 6590.12 IOPS, 25.74 MiB/s [2024-10-14T15:47:54.012Z] 7131.00 IOPS, 27.86 MiB/s [2024-10-14T15:47:55.390Z] 7579.20 IOPS, 29.61 MiB/s [2024-10-14T15:47:56.327Z] 7920.36 IOPS, 30.94 MiB/s [2024-10-14T15:47:57.262Z] 8215.08 IOPS, 32.09 MiB/s [2024-10-14T15:47:58.198Z] 8456.69 IOPS, 33.03 MiB/s [2024-10-14T15:47:59.133Z] 8679.00 IOPS, 33.90 MiB/s 00:30:59.995 Latency(us) 00:30:59.995 [2024-10-14T15:47:59.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.995 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:59.995 Verification LBA range: start 0x0 length 0x4000 00:30:59.995 Nvme1n1 : 15.01 8858.40 34.60 10853.12 0.00 6474.15 600.75 25090.93 00:30:59.995 [2024-10-14T15:47:59.133Z] =================================================================================================================== 00:30:59.995 [2024-10-14T15:47:59.133Z] Total : 8858.40 34.60 10853.12 0.00 6474.15 600.75 25090.93 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:00.254 rmmod nvme_tcp 00:31:00.254 rmmod nvme_fabrics 00:31:00.254 rmmod nvme_keyring 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 1262043 ']' 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 1262043 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1262043 ']' 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1262043 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1262043 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1262043' 00:31:00.254 killing process with pid 1262043 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1262043 00:31:00.254 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1262043 00:31:00.513 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:00.513 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:00.513 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:00.513 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:31:00.513 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:31:00.513 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:00.513 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:31:00.514 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:00.514 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:00.514 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.514 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:00.514 17:47:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.419 17:48:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:02.419 00:31:02.419 real 0m26.096s 00:31:02.419 user 1m1.010s 00:31:02.419 sys 0m6.639s 00:31:02.419 17:48:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:02.419 17:48:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:02.419 ************************************ 
00:31:02.419 END TEST nvmf_bdevperf 00:31:02.419 ************************************ 00:31:02.678 17:48:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:02.678 17:48:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:02.678 17:48:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:02.678 17:48:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.678 ************************************ 00:31:02.678 START TEST nvmf_target_disconnect 00:31:02.678 ************************************ 00:31:02.678 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:02.678 * Looking for test storage... 00:31:02.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:02.678 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:02.678 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:31:02.678 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:02.678 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:02.678 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:02.678 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:02.678 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:02.678 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:31:02.678 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:02.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.679 --rc genhtml_branch_coverage=1 00:31:02.679 --rc genhtml_function_coverage=1 00:31:02.679 --rc genhtml_legend=1 00:31:02.679 --rc geninfo_all_blocks=1 00:31:02.679 --rc geninfo_unexecuted_blocks=1 00:31:02.679 00:31:02.679 ' 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:02.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.679 --rc genhtml_branch_coverage=1 00:31:02.679 --rc genhtml_function_coverage=1 00:31:02.679 --rc genhtml_legend=1 00:31:02.679 --rc geninfo_all_blocks=1 00:31:02.679 --rc geninfo_unexecuted_blocks=1 00:31:02.679 00:31:02.679 ' 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:02.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.679 --rc genhtml_branch_coverage=1 00:31:02.679 --rc genhtml_function_coverage=1 00:31:02.679 --rc genhtml_legend=1 00:31:02.679 --rc geninfo_all_blocks=1 00:31:02.679 --rc geninfo_unexecuted_blocks=1 00:31:02.679 00:31:02.679 ' 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:02.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.679 --rc genhtml_branch_coverage=1 00:31:02.679 --rc genhtml_function_coverage=1 00:31:02.679 --rc genhtml_legend=1 00:31:02.679 --rc geninfo_all_blocks=1 00:31:02.679 --rc geninfo_unexecuted_blocks=1 00:31:02.679 00:31:02.679 ' 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:02.679 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:02.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:31:02.939 17:48:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:09.505 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:09.506 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:09.506 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:09.506 Found net devices under 0000:86:00.0: cvl_0_0 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:09.506 Found net devices under 0000:86:00.1: cvl_0_1 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
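The device discovery above reduces to one sysfs glob per supported PCI function, and the trace that follows splits the two resulting ports between network namespaces so target (10.0.0.2) and initiator (10.0.0.1) can share one machine. A condensed paraphrase of what nvmf/common.sh does, using the device names from this trace; this is a sketch, not a verbatim copy of the script:

# Each supported NIC's netdev name comes from sysfs (here: cvl_0_0, cvl_0_1):
ls /sys/bus/pci/devices/0000:86:00.0/net/     # -> cvl_0_0
ls /sys/bus/pci/devices/0000:86:00.1/net/     # -> cvl_0_1
# Target port moves into its own namespace; initiator port stays in the root ns:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up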
00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:09.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:09.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:31:09.506 00:31:09.506 --- 10.0.0.2 ping statistics --- 00:31:09.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.506 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:09.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:09.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:31:09.506 00:31:09.506 --- 10.0.0.1 ping statistics --- 00:31:09.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.506 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:09.506 ************************************ 00:31:09.506 START TEST nvmf_target_disconnect_tc1 00:31:09.506 ************************************ 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:09.506 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:09.506 17:48:07 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:09.507 [2024-10-14 17:48:07.898160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.507 [2024-10-14 17:48:07.898206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc8b70 with addr=10.0.0.2, port=4420 00:31:09.507 [2024-10-14 17:48:07.898230] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:09.507 [2024-10-14 17:48:07.898240] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:09.507 [2024-10-14 17:48:07.898247] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:31:09.507 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:09.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:09.507 Initializing NVMe Controllers 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:09.507 00:31:09.507 real 0m0.114s 00:31:09.507 user 0m0.047s 00:31:09.507 sys 0m0.067s 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:09.507 ************************************ 00:31:09.507 END TEST nvmf_target_disconnect_tc1 00:31:09.507 ************************************ 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:09.507 ************************************ 00:31:09.507 START TEST nvmf_target_disconnect_tc2 00:31:09.507 ************************************ 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1267157 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1267157 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1267157 ']' 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:09.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:09.507 17:48:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:09.507 [2024-10-14 17:48:08.036624] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:31:09.507 [2024-10-14 17:48:08.036670] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:09.507 [2024-10-14 17:48:08.109853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:09.507 [2024-10-14 17:48:08.153650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:09.507 [2024-10-14 17:48:08.153683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
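The nvmf_tgt invocation above pins the app with -m 0xF0, i.e. CPU cores 4 through 7, which is exactly where the four reactors report in just below. A one-liner to expand any such SPDK core mask:

# 0xF0 = 0b11110000 -> bits 4..7 set, so reactors land on cores 4-7
# (cf. the "Reactor started on core N" lines that follow):
mask=0xF0
for i in {0..31}; do (( (mask >> i) & 1 )) && echo "core $i"; done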
00:31:09.507 [2024-10-14 17:48:08.153690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:09.507 [2024-10-14 17:48:08.153696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:09.507 [2024-10-14 17:48:08.153702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:09.507 [2024-10-14 17:48:08.155287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:09.507 [2024-10-14 17:48:08.155419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:09.507 [2024-10-14 17:48:08.155444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:09.507 [2024-10-14 17:48:08.155445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:09.507 Malloc0 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:09.507 [2024-10-14 17:48:08.321680] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:09.507 17:48:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:09.507 [2024-10-14 17:48:08.353962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1267412 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:31:09.507 17:48:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:11.426 17:48:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1267157 00:31:11.426 17:48:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error 
(sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Write completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Write completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Write completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Write completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Write completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Write completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Write completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Write completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Write completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Write completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Write completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Write completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Write completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Write completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Write completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Write completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Write completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 [2024-10-14 17:48:10.381803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.426 Read completed with error (sct=0, sc=8) 00:31:11.426 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, 
sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 [2024-10-14 17:48:10.382010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 
starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 [2024-10-14 17:48:10.382211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O 
failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Write completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 Read completed with error (sct=0, sc=8) 00:31:11.427 starting I/O failed 00:31:11.427 [2024-10-14 17:48:10.382407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:11.427 [2024-10-14 17:48:10.382611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.427 [2024-10-14 17:48:10.382634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.427 qpair failed and we were unable to recover it. 00:31:11.427 [2024-10-14 17:48:10.382846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.427 [2024-10-14 17:48:10.382858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.427 qpair failed and we were unable to recover it. 00:31:11.427 [2024-10-14 17:48:10.383016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.427 [2024-10-14 17:48:10.383028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.427 qpair failed and we were unable to recover it. 00:31:11.427 [2024-10-14 17:48:10.383114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.427 [2024-10-14 17:48:10.383124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.427 qpair failed and we were unable to recover it. 00:31:11.427 [2024-10-14 17:48:10.383252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.427 [2024-10-14 17:48:10.383263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.427 qpair failed and we were unable to recover it. 00:31:11.427 [2024-10-14 17:48:10.383362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.427 [2024-10-14 17:48:10.383372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.427 qpair failed and we were unable to recover it. 00:31:11.427 [2024-10-14 17:48:10.383526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.383536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.383617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.383627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 
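At this point in the log the target side is gone: host/target_disconnect.sh launched the reconnect example with -q 32 -o 4096 -w randrw -M 50 -t 10 (queue depth 32, 4 KiB random I/O at a 50% read mix for 10 seconds), then sent kill -9 to nvmf_tgt pid 1267157. The in-flight commands on each I/O qpair complete with the generic NVMe status sc=8 (command aborted due to SQ deletion), the completion path reports CQ transport error -6 on qpair ids 4 through 1, and every subsequent connect() toward 10.0.0.2:4420 fails with errno 111 (ECONNREFUSED) because nothing is listening anymore. For reference, a minimal sketch of re-staging the target by hand with the same RPC sequence the test issued above; this assumes the stock scripts/rpc.py helper on the default /var/tmp/spdk.sock, and the binary path and netns name are taken from this job's layout:

  # sketch: restage the nvmf target exactly as the rpc_cmd calls above did
  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  # wait for /var/tmp/spdk.sock to appear (waitforlisten does this in the test), then:
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420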
00:31:11.428 [2024-10-14 17:48:10.383789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.383799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.383874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.383883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.384014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.384024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.384203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.384214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.384372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.384382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.384473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.384482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.384536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.384545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.384735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.384746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.384901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.384911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.385088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.385099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 
00:31:11.428 [2024-10-14 17:48:10.385193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.385216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.385303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.385321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.385467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.385479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.385553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.385563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.385719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.385730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.385854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.385865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.385991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.386002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.386081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.386091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.386152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.386162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.386312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.386321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 
00:31:11.428 [2024-10-14 17:48:10.386448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.386459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.386536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.386546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.386611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.386622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.386796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.386807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.386895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.386905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.387109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.387120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.387209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.387219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.387374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.387385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.387520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.387532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.387611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.387622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 
00:31:11.428 [2024-10-14 17:48:10.387702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.387712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.387902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.387913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.388050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.388061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.388136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.388146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.388228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.388238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.388303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.388313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.388384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.388394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.428 qpair failed and we were unable to recover it. 00:31:11.428 [2024-10-14 17:48:10.388525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.428 [2024-10-14 17:48:10.388537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.388617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.388627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.388760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.388769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 
00:31:11.429 [2024-10-14 17:48:10.388845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.388855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.388913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.388922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.388979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.388989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.389067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.389076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.389136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.389145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.389226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.389235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.389297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.389307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.389362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.389371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.389426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.389436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.389554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.389563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 
00:31:11.429 [2024-10-14 17:48:10.389620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.389632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.389708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.389718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.389785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.389795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.389870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.389880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.390009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.390019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.390155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.390164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.390224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.390234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.390338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.390348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.390407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.390416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.390543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.390553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 
00:31:11.429 [2024-10-14 17:48:10.390614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.390624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.390687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.390696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.390830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.390840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.390897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.390907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.391027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.391037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.391165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.391174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.391247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.391257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.391388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.391398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.391505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.391515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.391585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.391595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 
00:31:11.429 [2024-10-14 17:48:10.391663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.391673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.391800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.391810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.391934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.391943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.392028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.429 [2024-10-14 17:48:10.392038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.429 qpair failed and we were unable to recover it. 00:31:11.429 [2024-10-14 17:48:10.392198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.392207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.392326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.392336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.392440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.392449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.392649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.392662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.392728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.392738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.392806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.392816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 
00:31:11.430 [2024-10-14 17:48:10.392873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.392882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.392955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.392964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.393164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.393174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.393246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.393255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.393311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.393321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.393392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.393402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.393466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.393476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.393545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.393554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.393687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.393697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.393820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.393829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 
00:31:11.430 [2024-10-14 17:48:10.394025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.394036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.394118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.394128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.394267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.394276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.394346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.394356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.394428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.394437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.394509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.394519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.394577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.394587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.394660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.394670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.394746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.394755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 00:31:11.430 [2024-10-14 17:48:10.394882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.430 [2024-10-14 17:48:10.394892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.430 qpair failed and we were unable to recover it. 
00:31:11.430 [2024-10-14 17:48:10.395027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.430 [2024-10-14 17:48:10.395037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.430 qpair failed and we were unable to recover it.
00:31:11.430 [... the same three-record failure (posix.c:1055 connect() failed, errno = 111 -> nvme_tcp.c:2399 sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 17:48:10.395099 through 17:48:10.424376; duplicate records elided ...]
00:31:11.436 [2024-10-14 17:48:10.424421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.436 [2024-10-14 17:48:10.424452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.436 qpair failed and we were unable to recover it.
00:31:11.436 [2024-10-14 17:48:10.424552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.424583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.424790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.424821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.425001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.425031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.425143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.425175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.425363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.425395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.425624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.425657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.425776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.425807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.426065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.426102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.426377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.426408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.426620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.426652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 
00:31:11.436 [2024-10-14 17:48:10.426772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.426803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.426994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.427023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.427205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.427235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.427331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.427361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.427527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.427558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.427753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.427783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.427952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.427982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.428163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.428194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.428358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.428389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.428596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.428637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 
00:31:11.436 [2024-10-14 17:48:10.428817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.428847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.429023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.429053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.429164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.429194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.429328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.429358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.429519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.429549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.429748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.429780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.429987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.430018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.430215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.430245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.430413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.430444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.430631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.430664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 
00:31:11.436 [2024-10-14 17:48:10.430873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.430903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.431020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.431051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.431253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.431283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.436 qpair failed and we were unable to recover it. 00:31:11.436 [2024-10-14 17:48:10.431452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.436 [2024-10-14 17:48:10.431483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.437 qpair failed and we were unable to recover it. 00:31:11.437 [2024-10-14 17:48:10.431664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.437 [2024-10-14 17:48:10.431697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.437 qpair failed and we were unable to recover it. 00:31:11.437 [2024-10-14 17:48:10.431935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.437 [2024-10-14 17:48:10.431965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.437 qpair failed and we were unable to recover it. 00:31:11.437 [2024-10-14 17:48:10.432218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.437 [2024-10-14 17:48:10.432248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.437 qpair failed and we were unable to recover it. 00:31:11.437 [2024-10-14 17:48:10.432433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.437 [2024-10-14 17:48:10.432464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.437 qpair failed and we were unable to recover it. 00:31:11.437 [2024-10-14 17:48:10.432649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.437 [2024-10-14 17:48:10.432682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.437 qpair failed and we were unable to recover it. 00:31:11.437 [2024-10-14 17:48:10.432870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.437 [2024-10-14 17:48:10.432902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.437 qpair failed and we were unable to recover it. 
00:31:11.437 [2024-10-14 17:48:10.433016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.437 [2024-10-14 17:48:10.433048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.437 qpair failed and we were unable to recover it. 00:31:11.437 [2024-10-14 17:48:10.433226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.437 [2024-10-14 17:48:10.433256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.437 qpair failed and we were unable to recover it. 00:31:11.437 [2024-10-14 17:48:10.433429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.437 [2024-10-14 17:48:10.433460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.437 qpair failed and we were unable to recover it. 00:31:11.437 [2024-10-14 17:48:10.433704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.437 [2024-10-14 17:48:10.433736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.437 qpair failed and we were unable to recover it. 00:31:11.437 [2024-10-14 17:48:10.433850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.437 [2024-10-14 17:48:10.433881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.437 qpair failed and we were unable to recover it. 00:31:11.437 [2024-10-14 17:48:10.434064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.437 [2024-10-14 17:48:10.434094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.437 qpair failed and we were unable to recover it. 00:31:11.437 [2024-10-14 17:48:10.434190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.437 [2024-10-14 17:48:10.434219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.437 qpair failed and we were unable to recover it. 00:31:11.437 [2024-10-14 17:48:10.434408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.437 [2024-10-14 17:48:10.434444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.437 qpair failed and we were unable to recover it. 00:31:11.437 [2024-10-14 17:48:10.434689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.437 [2024-10-14 17:48:10.434722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.437 qpair failed and we were unable to recover it. 00:31:11.437 [2024-10-14 17:48:10.434821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.437 [2024-10-14 17:48:10.434851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.437 qpair failed and we were unable to recover it. 
00:31:11.437 [2024-10-14 17:48:10.435081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.437 [2024-10-14 17:48:10.435153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.437 qpair failed and we were unable to recover it.
00:31:11.437 [... same error pair repeats for tqpair=0x7f1a20000b90 through 17:48:10.439884 ...]
00:31:11.437 [2024-10-14 17:48:10.440093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.438 [2024-10-14 17:48:10.440128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.438 qpair failed and we were unable to recover it.
00:31:11.438 [... same error pair repeats for tqpair=0x7f1a14000b90 through 17:48:10.448283 ...]
00:31:11.438 [2024-10-14 17:48:10.448544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.439 [2024-10-14 17:48:10.448635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.439 qpair failed and we were unable to recover it.
00:31:11.440 [... same error pair repeats for tqpair=0x7f1a18000b90 through 17:48:10.460797 ...]
00:31:11.440 [2024-10-14 17:48:10.460982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.461014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 00:31:11.440 [2024-10-14 17:48:10.461195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.461225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 00:31:11.440 [2024-10-14 17:48:10.461338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.461369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 00:31:11.440 [2024-10-14 17:48:10.461639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.461671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 00:31:11.440 [2024-10-14 17:48:10.461862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.461894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 00:31:11.440 [2024-10-14 17:48:10.462081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.462117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 00:31:11.440 [2024-10-14 17:48:10.465624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.465678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 00:31:11.440 [2024-10-14 17:48:10.465902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.465940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 00:31:11.440 [2024-10-14 17:48:10.466191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.466229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 00:31:11.440 [2024-10-14 17:48:10.466440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.466476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 
00:31:11.440 [2024-10-14 17:48:10.466625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.466661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 00:31:11.440 [2024-10-14 17:48:10.466914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.466947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 00:31:11.440 [2024-10-14 17:48:10.467062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.467095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 00:31:11.440 [2024-10-14 17:48:10.467286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.467325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 00:31:11.440 [2024-10-14 17:48:10.467474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.467509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 00:31:11.440 [2024-10-14 17:48:10.467699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.467735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 00:31:11.440 [2024-10-14 17:48:10.467940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.467977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 00:31:11.440 [2024-10-14 17:48:10.470625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.470677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 00:31:11.440 [2024-10-14 17:48:10.470876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.470906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 00:31:11.440 [2024-10-14 17:48:10.471139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.471173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.440 qpair failed and we were unable to recover it. 
00:31:11.440 [2024-10-14 17:48:10.471352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.440 [2024-10-14 17:48:10.471383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.471559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.471588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.471726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.471755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.474617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.474659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.474863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.474891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.475105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.475133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.475250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.475277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.475391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.475418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.475515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.475541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.475725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.475754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 
00:31:11.441 [2024-10-14 17:48:10.475931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.475958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.478615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.478649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.478801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.478820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.479001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.479021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.479213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.479234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.479389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.479409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.479591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.479616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.482611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.482640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.482812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.482832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.482992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.483011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 
00:31:11.441 [2024-10-14 17:48:10.483169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.483189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.483335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.483354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.483542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.483563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.483667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.483684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.483779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.483795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.483879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.483896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.484051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.484070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.484227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.484247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.484343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.484360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.484458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.484476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 
00:31:11.441 [2024-10-14 17:48:10.484651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.484673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.486611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.486636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.486823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.486844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.487031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.487052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.487142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.487159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.487301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.487321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.487433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.487452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.487616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.487637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.487811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.487831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.441 [2024-10-14 17:48:10.487934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.487951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 
00:31:11.441 [2024-10-14 17:48:10.490609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.441 [2024-10-14 17:48:10.490627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.441 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.490706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.490718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.490791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.490803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.490891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.490903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.491060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.491075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.491159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.491171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.491303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.491318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.491488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.491502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.491663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.491679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.491884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.491901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 
00:31:11.442 [2024-10-14 17:48:10.491971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.491983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.492065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.492078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.492214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.492229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.492349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.492363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.494609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.494628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.494699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.494710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.494860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.494875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.495051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.495072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.495218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.495233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.495350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.495365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 
00:31:11.442 [2024-10-14 17:48:10.495432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.495444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.495611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.495628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.495767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.495782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.495860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.495872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.495963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.495976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.496061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.496073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.496218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.496234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.498609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.498631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.498778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.498792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.498921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.498934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 
00:31:11.442 [2024-10-14 17:48:10.499078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.499091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.499157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.499168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.499355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.499369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.499523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.499537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.499622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.499634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.499765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.499778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.499850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.499861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.499919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.499929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.500076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.500088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.442 [2024-10-14 17:48:10.500149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.500159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 
00:31:11.442 [2024-10-14 17:48:10.500288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.442 [2024-10-14 17:48:10.500301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.442 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.500367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.500378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.500582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.500596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.502608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.502640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.502878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.502892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.503037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.503050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.503215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.503229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.503428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.503441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.503644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.503658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.503741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.503753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 
00:31:11.443 [2024-10-14 17:48:10.503833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.503846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.503965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.503977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.504108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.504121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.504189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.504201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.504277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.504288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.504352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.504363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.504496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.504508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.504704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.504718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.506608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.506623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.506843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.506854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 
00:31:11.443 [2024-10-14 17:48:10.506991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.507002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.507127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.507137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.507261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.507271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.507463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.507473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.507604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.507616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.507683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.507693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.507835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.507847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.507927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.507938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.508085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.508096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 00:31:11.443 [2024-10-14 17:48:10.508183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.443 [2024-10-14 17:48:10.508193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.443 qpair failed and we were unable to recover it. 
00:31:11.443 [2024-10-14 17:48:10.508386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.443 [2024-10-14 17:48:10.508397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.443 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 → nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it.) repeats continuously, with timestamps running from 17:48:10.508386 through 17:48:10.534993 ...]
00:31:11.449 [2024-10-14 17:48:10.535131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.535147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.535252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.535268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.535369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.535386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.535535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.535551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.535720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.535740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.535828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.535860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.536111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.536141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.536331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.536362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.536533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.536549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.536705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.536722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 
00:31:11.449 [2024-10-14 17:48:10.536808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.536824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.536997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.537013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.537099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.537115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.537265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.537281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.537440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.537456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.537533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.537549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.537639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.537656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.537795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.537810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.537960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.537976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.538065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.538081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 
00:31:11.449 [2024-10-14 17:48:10.538165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.538180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.538336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.538352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.538450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.538466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.538541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.538556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.538630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.538647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.538712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.538728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.538874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.538890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.539039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.539054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.539246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.539267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.449 qpair failed and we were unable to recover it. 00:31:11.449 [2024-10-14 17:48:10.539420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.449 [2024-10-14 17:48:10.539441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 
00:31:11.450 [2024-10-14 17:48:10.539633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.539655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.539754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.539775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.539989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.540009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.540121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.540142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.540305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.540325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.540489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.540509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.540679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.540702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.540806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.540827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.540932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.540953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.541052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.541072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 
00:31:11.450 [2024-10-14 17:48:10.541337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.541368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.541490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.541521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.541657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.541690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.541874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.541895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.542083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.542118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.542294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.542324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.542448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.542478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.542667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.542708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.542968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.542998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.543257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.543288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 
00:31:11.450 [2024-10-14 17:48:10.543478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.543508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.543690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.543723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.543896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.543937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.544114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.544135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.544230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.544251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.544420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.544440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.544612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.544634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.544856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.544887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.545009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.545042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.545238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.545268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 
00:31:11.450 [2024-10-14 17:48:10.545435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.545465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.545668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.545701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.545871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.545901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.546105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.546136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.546253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.546284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.546523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.546554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.546772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.546804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.546923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.546953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.547194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.547225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 00:31:11.450 [2024-10-14 17:48:10.547343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.450 [2024-10-14 17:48:10.547373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.450 qpair failed and we were unable to recover it. 
00:31:11.451 [2024-10-14 17:48:10.547471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.547501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 00:31:11.451 [2024-10-14 17:48:10.547823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.547893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 00:31:11.451 [2024-10-14 17:48:10.548071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.548142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 00:31:11.451 [2024-10-14 17:48:10.548348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.548384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 00:31:11.451 [2024-10-14 17:48:10.548684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.548721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 00:31:11.451 [2024-10-14 17:48:10.548996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.549028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 00:31:11.451 [2024-10-14 17:48:10.549214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.549246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 00:31:11.451 [2024-10-14 17:48:10.549437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.549469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 00:31:11.451 [2024-10-14 17:48:10.549663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.549696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 00:31:11.451 [2024-10-14 17:48:10.549876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.549907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 
00:31:11.451 [2024-10-14 17:48:10.550106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.550137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 00:31:11.451 [2024-10-14 17:48:10.550261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.550294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 00:31:11.451 [2024-10-14 17:48:10.550462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.550493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 00:31:11.451 [2024-10-14 17:48:10.550624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.550657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 00:31:11.451 [2024-10-14 17:48:10.550896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.550936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 00:31:11.451 [2024-10-14 17:48:10.551120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.551152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 00:31:11.451 [2024-10-14 17:48:10.551363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.551394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 00:31:11.451 [2024-10-14 17:48:10.551563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.551596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 00:31:11.451 [2024-10-14 17:48:10.551781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.551813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 00:31:11.451 [2024-10-14 17:48:10.552055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.552085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 
00:31:11.451 [2024-10-14 17:48:10.552220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.451 [2024-10-14 17:48:10.552251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.451 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.552423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.552455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.552696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.552730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.552988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.553019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.553216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.553248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.553507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.553538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.553739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.553772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.553906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.553936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.554074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.554106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.554290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.554321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 
00:31:11.734 [2024-10-14 17:48:10.554509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.554540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.554680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.554712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.554914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.554946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.555121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.555151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.555335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.555367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.555486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.555517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.555686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.555719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.555896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.555926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.556098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.556130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.556314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.556344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 
00:31:11.734 [2024-10-14 17:48:10.556579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.556622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.556751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.556806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.557032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.557066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.557301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.557333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.557449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.557480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.557596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.557641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.557844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.557875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.557974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.558006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.558206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.558235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.558358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.558389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 
00:31:11.734 [2024-10-14 17:48:10.558518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.558549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.558665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.558697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.558815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.558847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.558963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.558994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.559111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.559141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.559259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.559290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.559470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.559503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.559770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.559803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.559939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.559971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 00:31:11.734 [2024-10-14 17:48:10.560143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.734 [2024-10-14 17:48:10.560174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.734 qpair failed and we were unable to recover it. 
00:31:11.735 [2024-10-14 17:48:10.560287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.735 [2024-10-14 17:48:10.560318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.735 qpair failed and we were unable to recover it.
00:31:11.735 [... the same three-line failure (posix.c:1055 connect() failed, errno = 111 -> nvme_tcp.c:2399 sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously from 17:48:10.560504 through 17:48:10.602592; ~200 duplicate occurrences elided ...]
00:31:11.740 [2024-10-14 17:48:10.602896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.602965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it.
00:31:11.740 [2024-10-14 17:48:10.603336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.603368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.603476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.603506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.603703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.603737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.603973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.604004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.604238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.604270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.604451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.604481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.604618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.604650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.604856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.604886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.605067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.605098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.605277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.605308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 
00:31:11.740 [2024-10-14 17:48:10.605544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.605575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.605706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.605747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.605956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.605987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.606219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.606250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.606450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.606480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.606651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.606684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.606814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.606845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.607025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.607056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.607173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.607203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.607388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.607419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 
00:31:11.740 [2024-10-14 17:48:10.607519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.607551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.607752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.607784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.607989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.608020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.608205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.608236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.608438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.608468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.608657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.608691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.608897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.608929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.609054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.609085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.609274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.609305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 00:31:11.740 [2024-10-14 17:48:10.609498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.740 [2024-10-14 17:48:10.609529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.740 qpair failed and we were unable to recover it. 
00:31:11.740 [2024-10-14 17:48:10.609721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.609753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.609867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.609898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.610070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.610101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.610339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.610369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.610544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.610576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.610775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.610807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.611073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.611104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.611236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.611268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.611456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.611488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.611673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.611705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 
00:31:11.741 [2024-10-14 17:48:10.611941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.611972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.612210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.612242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.612425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.612456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.612695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.612729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.612914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.612946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.613154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.613185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.613319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.613350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.613531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.613563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.613782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.613814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.613995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.614026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 
00:31:11.741 [2024-10-14 17:48:10.614201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.614232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.614402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.614440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.614619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.614651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.614913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.614943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.615177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.615207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.615386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.615417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.615673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.615706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.615811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.615841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.616081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.616111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.616374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.616405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 
00:31:11.741 [2024-10-14 17:48:10.616645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.616677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.616847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.616878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.617002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.617033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.617233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.617263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.617369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.617400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.617616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.617648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.617768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.617799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.617930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.617962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.618076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.618107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 00:31:11.741 [2024-10-14 17:48:10.618221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.618251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.741 qpair failed and we were unable to recover it. 
00:31:11.741 [2024-10-14 17:48:10.618357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.741 [2024-10-14 17:48:10.618388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.618515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.618546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.618728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.618759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.618925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.618956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.619082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.619113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.619234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.619265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.619396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.619427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.619616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.619649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.619759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.619790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.619896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.619927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 
00:31:11.742 [2024-10-14 17:48:10.620111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.620142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.620353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.620384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.620558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.620589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.620807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.620838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.620953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.620984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.621188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.621219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.621353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.621384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.621567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.621598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.621809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.621840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.622030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.622061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 
00:31:11.742 [2024-10-14 17:48:10.622166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.622197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.622406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.622444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.622573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.622612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.622719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.622750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.622992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.623023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.623239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.623270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.623399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.623431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.623647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.623680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.623945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.623976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.624152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.624183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 
00:31:11.742 [2024-10-14 17:48:10.624372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.624403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.624577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.624619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.624809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.624840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.625024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.625055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.625180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.625210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.625339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.625370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.625553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.625585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.625801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.625832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.626018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.626050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.742 [2024-10-14 17:48:10.626153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.626185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 
00:31:11.742 [2024-10-14 17:48:10.626456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.742 [2024-10-14 17:48:10.626487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.742 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.626663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.626696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.626884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.626915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.627112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.627143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.627324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.627355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.627588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.627630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.627799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.627831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.628003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.628034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.628242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.628312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.628455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.628491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 
00:31:11.743 [2024-10-14 17:48:10.628630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.628666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.628875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.628906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.629025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.629056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.629319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.629350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.629553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.629583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.629784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.629815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.630007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.630038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.630155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.630186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.630304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.630334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.630540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.630571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 
00:31:11.743 [2024-10-14 17:48:10.630787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.630822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.630926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.630957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.631080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.631112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.631299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.631329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.631563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.631594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.631760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.631792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.631910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.631941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.632121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.632153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.632324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.632354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 00:31:11.743 [2024-10-14 17:48:10.632529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.743 [2024-10-14 17:48:10.632561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.743 qpair failed and we were unable to recover it. 
00:31:11.743 [2024-10-14 17:48:10.632809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.743 [2024-10-14 17:48:10.632841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.743 qpair failed and we were unable to recover it.
00:31:11.743 [2024-10-14 17:48:10.633022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.743 [2024-10-14 17:48:10.633053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.743 qpair failed and we were unable to recover it.
00:31:11.743 [2024-10-14 17:48:10.633246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.743 [2024-10-14 17:48:10.633277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.743 qpair failed and we were unable to recover it.
00:31:11.743 [2024-10-14 17:48:10.633481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.743 [2024-10-14 17:48:10.633512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.743 qpair failed and we were unable to recover it.
00:31:11.743 [2024-10-14 17:48:10.633698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.743 [2024-10-14 17:48:10.633731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.743 qpair failed and we were unable to recover it.
00:31:11.743 [2024-10-14 17:48:10.633999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.743 [2024-10-14 17:48:10.634037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.743 qpair failed and we were unable to recover it.
00:31:11.743 [2024-10-14 17:48:10.634155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.743 [2024-10-14 17:48:10.634185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.634311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.634340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.634541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.634571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.634755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.634789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.634915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.634948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.635208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.635238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.635367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.635398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.635619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.635652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.635758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.635789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.635983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.636014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.636196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.636225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.636396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.636427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.636612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.636644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.636872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.636903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.637170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.637201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.637320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.637351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.637532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.637562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.637704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.637736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.637862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.637892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.637999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.638029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.638214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.638245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.638493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.638523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.638651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.638684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.638925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.638955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.639125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.639156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.639260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.639290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.639479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.639514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.639727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.639760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.639938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.639968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.640140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.640172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.640292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.640323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.640534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.640565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.640776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.640809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.640998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.641030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.641149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.641179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.641346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.641377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.641553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.641583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.641719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.641750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.641941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.641973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.642143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.642174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.642384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.744 [2024-10-14 17:48:10.642416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.744 qpair failed and we were unable to recover it.
00:31:11.744 [2024-10-14 17:48:10.642622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.642654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.642786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.642818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.642994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.643025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.643155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.643185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.643407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.643439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.643702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.643736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.643919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.643951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.644135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.644167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.644424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.644456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.644636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.644669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.644841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.644872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.645092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.645124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.645324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.645357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.645536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.645567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.645686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.645718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.645835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.645866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.646038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.646069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.646313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.646344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.646554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.646585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.646722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.646753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.646990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.647021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.647214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.647247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.647456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.647488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.647678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.647711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.647986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.648017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.648279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.648316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.648450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.648482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.648679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.648712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.648825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.648856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.649039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.649070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.649243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.649274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.649380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.649411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.649594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.649633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.649806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.649838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.649939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.649971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.650141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.650176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.650414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.650452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.650636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.650668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.650805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.650836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.650979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.651010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.745 [2024-10-14 17:48:10.651193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.745 [2024-10-14 17:48:10.651224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.745 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.651469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.651501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.651675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.651706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.651900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.651931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.652064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.652095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.652221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.652252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.652448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.652480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.652597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.652647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.652766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.652797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.652969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.653001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.653208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.653239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.653364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.653396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.653522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.653554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.653744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.653776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.653970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.654002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.654171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.654203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.654373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.654405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.654520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.654552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.654691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.654723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.654961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.654992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.655104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.655135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.655252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.655284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.655400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.655432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.655673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.655706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.655880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.655912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.656025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.656062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.656323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.656355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.656490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.656522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.656722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.656755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.656927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.656958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.657130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.657161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.657337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.657369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.657629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.657662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.657851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.657882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.658002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.658033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.658281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.658313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.658562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.658594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.658721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.658753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.658926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.658958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.659200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.659232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.659416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.746 [2024-10-14 17:48:10.659447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.746 qpair failed and we were unable to recover it.
00:31:11.746 [2024-10-14 17:48:10.659580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.659621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.659878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.659911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.660091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.660122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.660384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.660416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.660611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.660644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.660934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.660965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.661155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.661186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.661373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.661404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.661611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.661644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.661843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.661874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.662064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.662096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.662356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.662389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.662502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.662534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.662795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.662828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.662960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.662990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.663231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.663262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.663376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.663407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.663597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.663658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.663850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.663882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.664004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.664034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.664164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.664195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.664309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.664341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.664511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.664542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.664649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.664682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.664858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.664896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.665023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.665054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.665288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.665320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.665492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.665523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.665696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.665728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.665914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.665946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.666204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.666236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.666364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.666405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.666594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.666638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.666822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.666854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.667138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.667169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.667403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.667435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.667624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.667679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.667872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.667904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.668106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.668138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.668409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.747 [2024-10-14 17:48:10.668440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.747 qpair failed and we were unable to recover it.
00:31:11.747 [2024-10-14 17:48:10.668640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.668672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.668798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.668829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.668952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.668982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.669117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.669149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.669328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.669360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.669566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.669596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.669713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.669743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.669876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.669907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.670094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.670124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.670384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.670415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.670591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.670629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.670818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.670849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.670962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.670994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.671097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.671128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.671310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.671342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.671469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.671500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.671686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.671719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.671893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.671925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.672045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.672076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.672271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.672302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.672487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.672518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.672787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.672818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.673000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.673031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.673227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.673258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.673440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.673476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.673593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.673633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.673859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.673890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.674074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.674105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.674341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.674372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.674572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.674609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.674785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.674816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.675061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.675092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.675210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.675240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.675442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.675472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.675655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.675688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.675820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.748 [2024-10-14 17:48:10.675852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.748 qpair failed and we were unable to recover it.
00:31:11.748 [2024-10-14 17:48:10.676039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.749 [2024-10-14 17:48:10.676070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.749 qpair failed and we were unable to recover it.
00:31:11.749 [2024-10-14 17:48:10.676277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.749 [2024-10-14 17:48:10.676308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.749 qpair failed and we were unable to recover it.
00:31:11.749 [2024-10-14 17:48:10.676425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.749 [2024-10-14 17:48:10.676457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.749 qpair failed and we were unable to recover it.
00:31:11.749 [2024-10-14 17:48:10.676718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.749 [2024-10-14 17:48:10.676750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.749 qpair failed and we were unable to recover it.
00:31:11.749 [2024-10-14 17:48:10.676924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.749 [2024-10-14 17:48:10.676955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.749 qpair failed and we were unable to recover it.
00:31:11.749 [2024-10-14 17:48:10.677071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.749 [2024-10-14 17:48:10.677103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.749 qpair failed and we were unable to recover it.
00:31:11.749 [2024-10-14 17:48:10.677234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.677266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.677396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.677430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.677564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.677594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.677791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.677824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.677927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.677958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.678076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.678108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.678291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.678323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.678435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.678466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.678635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.678668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.678863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.678896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 
00:31:11.749 [2024-10-14 17:48:10.679074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.679105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.679288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.679320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.679435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.679466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.679652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.679685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.679944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.679976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.680217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.680250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.680378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.680410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.680674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.680707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.680894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.680926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.681036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.681066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 
00:31:11.749 [2024-10-14 17:48:10.681235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.681267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.681375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.681406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.681588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.681634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.681809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.681840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.682043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.682073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.682207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.682238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.682367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.682398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.682612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.682646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.682771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.682802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.682919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.682949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 
00:31:11.749 [2024-10-14 17:48:10.683213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.683245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.683411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.683443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.683554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.683584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.683723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.683756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.749 [2024-10-14 17:48:10.683875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.749 [2024-10-14 17:48:10.683907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.749 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.684154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.684184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.684380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.684412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.684592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.684637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.684741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.684773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.684893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.684925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 
00:31:11.750 [2024-10-14 17:48:10.685117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.685150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.685326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.685357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.685476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.685507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.685682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.685716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.685887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.685919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.686041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.686073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.686187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.686218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.686331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.686363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.686542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.686574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.686828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.686859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 
00:31:11.750 [2024-10-14 17:48:10.686961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.686993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.687100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.687131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.687254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.687285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.687455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.687487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.687599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.687641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.687809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.687841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.687953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.687984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.688183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.688215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.688321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.688352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.688464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.688495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 
00:31:11.750 [2024-10-14 17:48:10.688637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.688670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.688871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.688904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.689144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.689181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.689311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.689343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.689578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.689620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.689738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.689770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.689882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.689912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.690196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.690228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.690467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.690499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.690691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.690724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 
00:31:11.750 [2024-10-14 17:48:10.690935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.690966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.691147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.691179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.691426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.691457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.691576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.691614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.691728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.750 [2024-10-14 17:48:10.691760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.750 qpair failed and we were unable to recover it. 00:31:11.750 [2024-10-14 17:48:10.691863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.751 [2024-10-14 17:48:10.691895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.751 qpair failed and we were unable to recover it. 00:31:11.751 [2024-10-14 17:48:10.692022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.751 [2024-10-14 17:48:10.692054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.751 qpair failed and we were unable to recover it. 00:31:11.751 [2024-10-14 17:48:10.692224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.751 [2024-10-14 17:48:10.692256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.751 qpair failed and we were unable to recover it. 00:31:11.751 [2024-10-14 17:48:10.692379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.751 [2024-10-14 17:48:10.692411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.751 qpair failed and we were unable to recover it. 00:31:11.751 [2024-10-14 17:48:10.692664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.751 [2024-10-14 17:48:10.692696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.751 qpair failed and we were unable to recover it. 
00:31:11.751 [2024-10-14 17:48:10.692815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.751 [2024-10-14 17:48:10.692847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.751 qpair failed and we were unable to recover it. 00:31:11.751 [2024-10-14 17:48:10.693104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.751 [2024-10-14 17:48:10.693136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.751 qpair failed and we were unable to recover it. 00:31:11.751 [2024-10-14 17:48:10.693307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.751 [2024-10-14 17:48:10.693338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.751 qpair failed and we were unable to recover it. 00:31:11.751 [2024-10-14 17:48:10.693519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.751 [2024-10-14 17:48:10.693551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.751 qpair failed and we were unable to recover it. 00:31:11.751 [2024-10-14 17:48:10.693742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.751 [2024-10-14 17:48:10.693775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.751 qpair failed and we were unable to recover it. 00:31:11.751 [2024-10-14 17:48:10.693906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.751 [2024-10-14 17:48:10.693937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.751 qpair failed and we were unable to recover it. 00:31:11.751 [2024-10-14 17:48:10.694045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.751 [2024-10-14 17:48:10.694076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.751 qpair failed and we were unable to recover it. 00:31:11.751 [2024-10-14 17:48:10.694264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.751 [2024-10-14 17:48:10.694297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.751 qpair failed and we were unable to recover it. 00:31:11.751 [2024-10-14 17:48:10.694531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.751 [2024-10-14 17:48:10.694563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.751 qpair failed and we were unable to recover it. 
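For reference: errno 111 on Linux is ECONNREFUSED, i.e. the target host answered but nothing was accepting connections on 10.0.0.2 port 4420, the NVMe/TCP listener port this test dials. The minimal C sketch below is illustration only (not part of this run, and not SPDK code); it reproduces the same connect() failure that posix_sock_create reports above.

/* Illustration only: shows how connect() yields errno 111 (ECONNREFUSED)
 * when the peer is up but no listener is bound to the port. The address
 * and port mirror the log; any closed local port behaves the same. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);               /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With a reachable host and no listener on the port, errno is
         * typically 111 (ECONNREFUSED), the value shown in this log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Run against a host with no listener on the port, this typically prints "connect() failed, errno = 111 (Connection refused)", matching the log lines above.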
00:31:11.751 [2024-10-14 17:48:10.694770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249fbb0 is same with the state(6) to be set
00:31:11.751 [2024-10-14 17:48:10.695057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.751 [2024-10-14 17:48:10.695126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.751 qpair failed and we were unable to recover it.
[... last 3 messages repeated approximately 78 more times, timestamps 17:48:10.695251 through 17:48:10.710987, differing only in timestamp; every attempt now reports tqpair=0x7f1a20000b90 with the same addr=10.0.0.2, port=4420 ...]
00:31:11.753 [2024-10-14 17:48:10.711157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.711189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 00:31:11.753 [2024-10-14 17:48:10.711380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.711412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 00:31:11.753 [2024-10-14 17:48:10.711648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.711681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 00:31:11.753 [2024-10-14 17:48:10.711790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.711822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 00:31:11.753 [2024-10-14 17:48:10.711997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.712029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 00:31:11.753 [2024-10-14 17:48:10.712266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.712298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 00:31:11.753 [2024-10-14 17:48:10.712421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.712453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 00:31:11.753 [2024-10-14 17:48:10.712629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.712663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 00:31:11.753 [2024-10-14 17:48:10.712859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.712889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 00:31:11.753 [2024-10-14 17:48:10.713129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.713161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 
00:31:11.753 [2024-10-14 17:48:10.713269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.713300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 00:31:11.753 [2024-10-14 17:48:10.713426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.713458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 00:31:11.753 [2024-10-14 17:48:10.713659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.713692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 00:31:11.753 [2024-10-14 17:48:10.713878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.713909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 00:31:11.753 [2024-10-14 17:48:10.714035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.714067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 00:31:11.753 [2024-10-14 17:48:10.714245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.714276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 00:31:11.753 [2024-10-14 17:48:10.714540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.714573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 00:31:11.753 [2024-10-14 17:48:10.714757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.714794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 00:31:11.753 [2024-10-14 17:48:10.714912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.714943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 00:31:11.753 [2024-10-14 17:48:10.715143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.753 [2024-10-14 17:48:10.715174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.753 qpair failed and we were unable to recover it. 
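errno = 111 is ECONNREFUSED: the target host's TCP stack is reachable but actively refuses the connection because nothing is listening on 10.0.0.2:4420 (the NVMe/TCP default port), and posix_sock_create reports that failure on every reconnect attempt. As a minimal illustration only (plain BSD sockets, not SPDK's code), the sketch below reproduces the same errno; it assumes the host replies with a TCP RST rather than timing out.

```c
/* Minimal sketch (not SPDK code): a TCP connect() to a reachable host
 * with no listener on the port fails with errno = 111 (ECONNREFUSED),
 * the condition logged above. Address and port mirror the log. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
    if (inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr) != 1) {
        fprintf(stderr, "inet_pton failed\n");
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```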
00:31:11.753 [2024-10-14 17:48:10.714757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:11.753 [2024-10-14 17:48:10.714794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 
00:31:11.753 qpair failed and we were unable to recover it. 
00:31:11.757 [... the same triplet repeats 152 more times for tqpair=0x7f1a14000b90, timestamps 2024-10-14 17:48:10.714912 through 17:48:10.746684, every attempt with errno = 111, addr=10.0.0.2, port=4420 ...] 
00:31:11.757 [2024-10-14 17:48:10.746839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.757 [2024-10-14 17:48:10.746870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.757 qpair failed and we were unable to recover it. 00:31:11.757 [2024-10-14 17:48:10.747106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.747136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.747260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.747290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.747394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.747425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.747545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.747576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.747770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.747802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.748043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.748075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.748210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.748241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.748360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.748391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.748507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.748539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 
00:31:11.758 [2024-10-14 17:48:10.748801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.748835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.749033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.749063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.749205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.749236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.749416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.749447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.749578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.749619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.749800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.749831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.750006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.750037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.750209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.750245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.750362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.750394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.750499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.750530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 
00:31:11.758 [2024-10-14 17:48:10.750695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.750729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.750898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.750928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.751141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.751173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.751432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.751463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.751654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.751686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.751791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.751822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.752010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.752040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.752207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.752238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.752432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.752467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.752644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.752676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 
00:31:11.758 [2024-10-14 17:48:10.752846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.752875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.753075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.753107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.753285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.753316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.753556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.753588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.753728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.753760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.753931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.753963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.754080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.758 [2024-10-14 17:48:10.754111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.758 qpair failed and we were unable to recover it. 00:31:11.758 [2024-10-14 17:48:10.754215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.754246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.754378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.754410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.754699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.754732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 
00:31:11.759 [2024-10-14 17:48:10.754924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.754954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.755068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.755098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.755283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.755314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.755554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.755584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.755777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.755809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.755991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.756022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.756147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.756178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.756280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.756311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.756523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.756554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.756795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.756828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 
00:31:11.759 [2024-10-14 17:48:10.757003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.757033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.757227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.757257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.757424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.757455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.757594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.757643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.757758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.757789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.757977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.758007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.758131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.758161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.758373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.758409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.758630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.758662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.758789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.758819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 
00:31:11.759 [2024-10-14 17:48:10.759097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.759128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.759261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.759292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.759395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.759426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.759534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.759564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.759761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.759795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.759917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.759948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.760186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.760216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.760393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.760424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.760554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.760584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.760797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.760829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 
00:31:11.759 [2024-10-14 17:48:10.760947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.760979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.761258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.761290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.761404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.761435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.761558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.761589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.761840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.761871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.762052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.762084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.759 [2024-10-14 17:48:10.762266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.759 [2024-10-14 17:48:10.762297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.759 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.762564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.762595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.762808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.762840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.763083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.763114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 
00:31:11.760 [2024-10-14 17:48:10.763233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.763263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.763372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.763403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.763515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.763546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.763724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.763756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.763987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.764057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.764277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.764311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.764489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.764521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.764782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.764816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.764938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.764969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.765147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.765179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 
00:31:11.760 [2024-10-14 17:48:10.765302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.765332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.765527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.765560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.765838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.765870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.766056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.766088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.766211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.766242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.766416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.766448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.766661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.766692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.766935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.766975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.767086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.767116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.767217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.767247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 
00:31:11.760 [2024-10-14 17:48:10.767445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.767476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.767764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.767796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.767969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.768000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.768175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.768205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.768395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.768427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.768622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.768654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.768821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.768852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.769029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.769061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.769247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.769277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.769448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.769480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 
00:31:11.760 [2024-10-14 17:48:10.769651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.769683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.769807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.769838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.760 [2024-10-14 17:48:10.770013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.760 [2024-10-14 17:48:10.770045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.760 qpair failed and we were unable to recover it. 00:31:11.761 [2024-10-14 17:48:10.770156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-10-14 17:48:10.770188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-10-14 17:48:10.770357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-10-14 17:48:10.770389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-10-14 17:48:10.770571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-10-14 17:48:10.770612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-10-14 17:48:10.770788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-10-14 17:48:10.770820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-10-14 17:48:10.770955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-10-14 17:48:10.770987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-10-14 17:48:10.771089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-10-14 17:48:10.771121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-10-14 17:48:10.771303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-10-14 17:48:10.771335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 
00:31:11.761 [2024-10-14 17:48:10.771460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-10-14 17:48:10.771491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-10-14 17:48:10.771685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-10-14 17:48:10.771718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-10-14 17:48:10.771902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-10-14 17:48:10.771933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-10-14 17:48:10.772190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-10-14 17:48:10.772221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-10-14 17:48:10.772397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-10-14 17:48:10.772430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-10-14 17:48:10.772670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-10-14 17:48:10.772703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-10-14 17:48:10.772842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-10-14 17:48:10.772874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-10-14 17:48:10.773011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-10-14 17:48:10.773043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-10-14 17:48:10.773294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-10-14 17:48:10.773326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 00:31:11.761 [2024-10-14 17:48:10.773501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.761 [2024-10-14 17:48:10.773532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.761 qpair failed and we were unable to recover it. 
00:31:11.761 [2024-10-14 17:48:10.773648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.761 [2024-10-14 17:48:10.773681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.761 qpair failed and we were unable to recover it.
00:31:11.761 [2024-10-14 17:48:10.773790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.761 [2024-10-14 17:48:10.773821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.761 qpair failed and we were unable to recover it.
00:31:11.761 [2024-10-14 17:48:10.774026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.761 [2024-10-14 17:48:10.774057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.761 qpair failed and we were unable to recover it.
00:31:11.761 [2024-10-14 17:48:10.774303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.761 [2024-10-14 17:48:10.774334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.761 qpair failed and we were unable to recover it.
00:31:11.761 [2024-10-14 17:48:10.774507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.761 [2024-10-14 17:48:10.774538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.761 qpair failed and we were unable to recover it.
00:31:11.761 [2024-10-14 17:48:10.774771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.761 [2024-10-14 17:48:10.774802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.761 qpair failed and we were unable to recover it.
00:31:11.761 [2024-10-14 17:48:10.774940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.761 [2024-10-14 17:48:10.774971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.761 qpair failed and we were unable to recover it.
00:31:11.761 [2024-10-14 17:48:10.775184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.761 [2024-10-14 17:48:10.775227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.761 qpair failed and we were unable to recover it.
00:31:11.761 [2024-10-14 17:48:10.775334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.761 [2024-10-14 17:48:10.775365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.761 qpair failed and we were unable to recover it.
00:31:11.761 [2024-10-14 17:48:10.775482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.761 [2024-10-14 17:48:10.775512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.761 qpair failed and we were unable to recover it.
00:31:11.761 [2024-10-14 17:48:10.775718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.761 [2024-10-14 17:48:10.775751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.761 qpair failed and we were unable to recover it.
00:31:11.761 [2024-10-14 17:48:10.775861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.761 [2024-10-14 17:48:10.775892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.761 qpair failed and we were unable to recover it.
00:31:11.761 [2024-10-14 17:48:10.776011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.761 [2024-10-14 17:48:10.776042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.761 qpair failed and we were unable to recover it.
00:31:11.761 [2024-10-14 17:48:10.776176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.761 [2024-10-14 17:48:10.776207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.761 qpair failed and we were unable to recover it.
00:31:11.761 [2024-10-14 17:48:10.776391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.761 [2024-10-14 17:48:10.776421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.761 qpair failed and we were unable to recover it.
00:31:11.761 [2024-10-14 17:48:10.776532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.761 [2024-10-14 17:48:10.776563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.761 qpair failed and we were unable to recover it.
00:31:11.761 [2024-10-14 17:48:10.776789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.761 [2024-10-14 17:48:10.776821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.761 qpair failed and we were unable to recover it.
00:31:11.761 [2024-10-14 17:48:10.777016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.761 [2024-10-14 17:48:10.777047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.761 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.777294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.777325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.777513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.777544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.777726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.777758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.777951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.777984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.778090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.778121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.778236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.778267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.778381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.778413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.778591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.778633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.778891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.778923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.779042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.779073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.779243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.779274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.779444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.779476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.779651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.779684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.779818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.779849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.780043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.780074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.780203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.780235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.780547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.780635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.780785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.780821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.780933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.780965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.781136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.781168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.781407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.781437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.781628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.781662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.781772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.781803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.781908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.781939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.782110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.782141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.782318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.782348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.782535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.782566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.782693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.782726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.782843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.782873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.783064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.783105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.783314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.783344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.783534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.783565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.783685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.783722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.783990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.784021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.784160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.784190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.762 [2024-10-14 17:48:10.784444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.762 [2024-10-14 17:48:10.784474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.762 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.784714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.784746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.784987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.785017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.785204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.785234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.785347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.785378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.785530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.785560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.785772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.785805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.786047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.786078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.786263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.786294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.786469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.786501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.786624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.786658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.786849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.786879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.787075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.787106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.787317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.787348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.787464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.787495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.787736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.787769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.787951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.787981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.788101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.788131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.788366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.788398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.788580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.788619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.788797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.788828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.789050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.789120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.789268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.789304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.789492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.789525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.789683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.789717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.789898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.789928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.790193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.790224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.790355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.790385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.790562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.790592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.790706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.790738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.790906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.790935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.791153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.791184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.791307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.791338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.791457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.791487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.791670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.791711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.791896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.791928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.792046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.792078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.792317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.792348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.792589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.792631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.792821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.792853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.792979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.763 [2024-10-14 17:48:10.793010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.763 qpair failed and we were unable to recover it.
00:31:11.763 [2024-10-14 17:48:10.793128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.793160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.793343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.793374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.793563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.793594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.793722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.793753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.793878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.793908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.794077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.794108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.794238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.794268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.794442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.794473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.794591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.794635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.794825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.794856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.794973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.795004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.795130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.795161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.795342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.795372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.795491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.795522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.795726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.795759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.795931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.795961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.796146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.796177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.796448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.796480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.796664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.796696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.796937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.796967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.797101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.797134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.797273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.797304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.797496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.797526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.797713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.797747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.797882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.797912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.798029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.798060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.798175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.798206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.798390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.798421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.798657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.798689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.798865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.798896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.799136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.799167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.799339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.799370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.799477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.799508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.799679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.799718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.800004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.800035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.800217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.800247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.800507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.800538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.800721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.800753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.800859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.800889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.801134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.764 [2024-10-14 17:48:10.801165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.764 qpair failed and we were unable to recover it.
00:31:11.764 [2024-10-14 17:48:10.801295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.801326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.801457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.801488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.801609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.801642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.801851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.801883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.802053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.802083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.802258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.802289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.802553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.802584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.802707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.802740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.802968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.802999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.803123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.803153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.803332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.803363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.803596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.803649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.803836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.803867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.804079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.804110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.804235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.804266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.804458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.804489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.804594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.804651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.804777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.804808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.805075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.805107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.805306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.805337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.805523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.805561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.805756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.805788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.805910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.805941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.806116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.806148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.806321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.806351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.806524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.806555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.806699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.806732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.806930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.806959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.807143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.807174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.807433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.807463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.807633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.807666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.807910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.807941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.808125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.808156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.808423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.808455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.808642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.808675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.765 qpair failed and we were unable to recover it.
00:31:11.765 [2024-10-14 17:48:10.808858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.765 [2024-10-14 17:48:10.808889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.809172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.809203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.809316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.809346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.809519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.809549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.809681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.809713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.809901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.809930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.810175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.810206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.810325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.810355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.810622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.810653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.810857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.810887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.811143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.811173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.811355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.811386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.811507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.811537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.811747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.811779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.811907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.811938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.812125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.812156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.812259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.812289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.812467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.812498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.812682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.812714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.812889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.812919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.813037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.813067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.813234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.813265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.813385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.813415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.813584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.813625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.813739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.813769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.813895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.813931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.814104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.814134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.814320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.814350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.814468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.814498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.814679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.814711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.814829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.814859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.815097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.815127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.815259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.815289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.815457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.815488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.815686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.815718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.815837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.815868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.816054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.816085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.816364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.816394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.816512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.816543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.816765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.816797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.766 [2024-10-14 17:48:10.816966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.766 [2024-10-14 17:48:10.816998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:11.766 qpair failed and we were unable to recover it.
00:31:11.767 [2024-10-14 17:48:10.817136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.817166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.817278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.817309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.817496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.817526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.817795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.817827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.818011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.818041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.818223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.818254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.818469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.818499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.818717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.818749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.819014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.819044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.819163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.819193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 
00:31:11.767 [2024-10-14 17:48:10.819406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.819437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.819649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.819682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.819880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.819911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.820172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.820204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.820414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.820445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.820701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.820733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.820925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.820956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.821074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.821105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.821293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.821324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.821449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.821481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 
00:31:11.767 [2024-10-14 17:48:10.821612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.821643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.821823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.821854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.822033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.822064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.822233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.822263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.822387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.822423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.822674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.822705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.822913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.822944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.823138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.823170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.823454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.823485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.823679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.823712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 
00:31:11.767 [2024-10-14 17:48:10.823902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.823933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.824117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.824148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.824257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.824289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.824458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.824489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.824747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.824780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.824967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.824998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.825178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.825208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.825403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.825434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.825619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.825652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.767 qpair failed and we were unable to recover it. 00:31:11.767 [2024-10-14 17:48:10.825857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.767 [2024-10-14 17:48:10.825889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 
00:31:11.768 [2024-10-14 17:48:10.826128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.826158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.826327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.826357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.826614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.826646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.826786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.826816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.827090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.827121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.827306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.827337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.827510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.827541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.827729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.827761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.827979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.828009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.828190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.828221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 
00:31:11.768 [2024-10-14 17:48:10.828406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.828436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.828707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.828741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.828863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.828894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.829085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.829116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.829377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.829408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.829593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.829632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.829811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.829842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.830032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.830064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.830237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.830268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.830458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.830490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 
00:31:11.768 [2024-10-14 17:48:10.830668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.830701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.830884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.830916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.831089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.831121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.831335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.831367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.831557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.831594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.831893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.831924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.832092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.832123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.832321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.832352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.832638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.832671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.832855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.832886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 
00:31:11.768 [2024-10-14 17:48:10.833132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.833163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.833351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.833382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.833561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.833593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.833853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.833884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.834056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.834087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.834294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.834325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.834456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.834487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.834673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.834706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.834884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.834916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 00:31:11.768 [2024-10-14 17:48:10.835103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.768 [2024-10-14 17:48:10.835135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.768 qpair failed and we were unable to recover it. 
00:31:11.769 [2024-10-14 17:48:10.835317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.835348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.835463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.835494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.835610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.835643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.835926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.835957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.836075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.836106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.836290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.836321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.836435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.836466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.836651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.836684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.836788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.836819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.836956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.836987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 
00:31:11.769 [2024-10-14 17:48:10.837100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.837132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.837311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.837343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.837611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.837644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.837837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.837868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.838105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.838136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.838334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.838366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.838609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.838643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.838822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.838853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.838961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.838992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.839208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.839239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 
00:31:11.769 [2024-10-14 17:48:10.839434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.839465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.839636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.839669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.839879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.839911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.840022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.840054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.840253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.840289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.840531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.840563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.840782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.840815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.841072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.841103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.841298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.841330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.841529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.841560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 
00:31:11.769 [2024-10-14 17:48:10.841774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.841806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.842083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.842114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.842319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.842350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.842542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.842573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.842774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.842806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.842919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.842950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.843083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.843114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.843308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.843339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.843540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.843572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 00:31:11.769 [2024-10-14 17:48:10.843867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.769 [2024-10-14 17:48:10.843937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.769 qpair failed and we were unable to recover it. 
00:31:11.769 [2024-10-14 17:48:10.844093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.770 [2024-10-14 17:48:10.844130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.770 qpair failed and we were unable to recover it. 00:31:11.770 [2024-10-14 17:48:10.844305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.770 [2024-10-14 17:48:10.844338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.770 qpair failed and we were unable to recover it. 00:31:11.770 [2024-10-14 17:48:10.844467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.770 [2024-10-14 17:48:10.844499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.770 qpair failed and we were unable to recover it. 00:31:11.770 [2024-10-14 17:48:10.844715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.770 [2024-10-14 17:48:10.844749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.770 qpair failed and we were unable to recover it. 00:31:11.770 [2024-10-14 17:48:10.845016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.770 [2024-10-14 17:48:10.845047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.770 qpair failed and we were unable to recover it. 00:31:11.770 [2024-10-14 17:48:10.845236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.770 [2024-10-14 17:48:10.845268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.770 qpair failed and we were unable to recover it. 00:31:11.770 [2024-10-14 17:48:10.845398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.770 [2024-10-14 17:48:10.845430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.770 qpair failed and we were unable to recover it. 00:31:11.770 [2024-10-14 17:48:10.845671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.770 [2024-10-14 17:48:10.845705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.770 qpair failed and we were unable to recover it. 00:31:11.770 [2024-10-14 17:48:10.845906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.770 [2024-10-14 17:48:10.845938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.770 qpair failed and we were unable to recover it. 00:31:11.770 [2024-10-14 17:48:10.846144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.770 [2024-10-14 17:48:10.846176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:11.770 qpair failed and we were unable to recover it. 
00:31:11.770 [2024-10-14 17:48:10.846288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.770 [2024-10-14 17:48:10.846320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:11.770 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair and "qpair failed and we were unable to recover it." line repeat verbatim for every reconnect attempt against tqpair=0x7f1a20000b90, timestamps 17:48:10.846288 through 17:48:10.889946 (elapsed 00:31:11.770-00:31:12.058); duplicate records condensed ...]
00:31:12.058 [2024-10-14 17:48:10.890200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.058 [2024-10-14 17:48:10.890270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.058 qpair failed and we were unable to recover it.
[... the same error pair then repeats against tqpair=0x2491c60, timestamps 17:48:10.890526 through 17:48:10.892173; duplicate records condensed ...]
00:31:12.058 [2024-10-14 17:48:10.892303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.058 [2024-10-14 17:48:10.892332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.058 qpair failed and we were unable to recover it. 00:31:12.058 [2024-10-14 17:48:10.892459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.058 [2024-10-14 17:48:10.892489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.058 qpair failed and we were unable to recover it. 00:31:12.058 [2024-10-14 17:48:10.892597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.058 [2024-10-14 17:48:10.892646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.058 qpair failed and we were unable to recover it. 00:31:12.058 [2024-10-14 17:48:10.892831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.058 [2024-10-14 17:48:10.892862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.058 qpair failed and we were unable to recover it. 00:31:12.058 [2024-10-14 17:48:10.892976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.058 [2024-10-14 17:48:10.893005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.058 qpair failed and we were unable to recover it. 00:31:12.058 [2024-10-14 17:48:10.893198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.058 [2024-10-14 17:48:10.893229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.058 qpair failed and we were unable to recover it. 00:31:12.058 [2024-10-14 17:48:10.893399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.058 [2024-10-14 17:48:10.893430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.058 qpair failed and we were unable to recover it. 00:31:12.058 [2024-10-14 17:48:10.893696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.058 [2024-10-14 17:48:10.893728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.058 qpair failed and we were unable to recover it. 00:31:12.058 [2024-10-14 17:48:10.893915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.058 [2024-10-14 17:48:10.893946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.058 qpair failed and we were unable to recover it. 00:31:12.058 [2024-10-14 17:48:10.894156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.058 [2024-10-14 17:48:10.894186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 
00:31:12.059 [2024-10-14 17:48:10.894318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.894347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.894522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.894552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.894691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.894723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.894911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.894941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.895174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.895204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.895325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.895354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.895620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.895652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.895766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.895796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.895966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.896003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.896260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.896291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 
00:31:12.059 [2024-10-14 17:48:10.896391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.896419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.896598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.896642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.896815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.896846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.896978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.897007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.897148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.897178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.897413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.897444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.897625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.897656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.897829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.897859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.898058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.898089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.898262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.898291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 
00:31:12.059 [2024-10-14 17:48:10.898484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.898515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.898634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.898666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.898926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.898957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.899130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.899161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.899371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.899402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.899592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.899632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.899736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.899766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.899898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.899928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.900130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.900161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.900268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.900299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 
00:31:12.059 [2024-10-14 17:48:10.900408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.900440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.900629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.900662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.900940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.900970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.901158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.901190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.901369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.901399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.901567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.901613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.901730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.901759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.059 [2024-10-14 17:48:10.901929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.059 [2024-10-14 17:48:10.901960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.059 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.902135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.902165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.902342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.902372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 
00:31:12.060 [2024-10-14 17:48:10.902571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.902610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.902783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.902815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.902918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.902947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.903088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.903118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.903301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.903333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.903525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.903556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.903747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.903780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.903964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.903993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.904183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.904214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.904394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.904425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 
00:31:12.060 [2024-10-14 17:48:10.904626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.904659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.904847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.904877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.905161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.905192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.905373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.905404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.905571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.905611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.905852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.905884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.906062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.906092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.906274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.906304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.906570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.906609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.906722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.906752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 
00:31:12.060 [2024-10-14 17:48:10.906884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.906914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.907109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.907140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.907345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.907381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.907548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.907578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.907777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.907807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.907919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.907951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.908184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.908215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.908424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.908454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.908635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.908668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.908945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.908976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 
00:31:12.060 [2024-10-14 17:48:10.909186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.909217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.909396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.909427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.909619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.909649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.909777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.909807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.909985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.910016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.910218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.910249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.910517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.060 [2024-10-14 17:48:10.910548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.060 qpair failed and we were unable to recover it. 00:31:12.060 [2024-10-14 17:48:10.910731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.910764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.910885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.910914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.911084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.911115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 
00:31:12.061 [2024-10-14 17:48:10.911361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.911392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.911511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.911541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.911754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.911786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.912049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.912080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.912335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.912365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.912649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.912689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.912827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.912858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.913042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.913073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.913202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.913245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.913493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.913528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 
00:31:12.061 [2024-10-14 17:48:10.913701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.913737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.913907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.913938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.914147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.914178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.914351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.914391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.914569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.914613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.914853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.914885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.914988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.915017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.915145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.915176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.915343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.915373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.915621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.915654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 
00:31:12.061 [2024-10-14 17:48:10.915772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.915803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.915986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.916018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.916229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.916261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.916523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.916594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.916768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.916805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.916914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.916947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.917141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.917172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.917308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.917339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.917590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.917636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.917780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.917810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 
00:31:12.061 [2024-10-14 17:48:10.917923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.917955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.918110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.918141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.918404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.918435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.918558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.918589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.918781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.918812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.919073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.919104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.919208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.061 [2024-10-14 17:48:10.919237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.061 qpair failed and we were unable to recover it. 00:31:12.061 [2024-10-14 17:48:10.919438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.919469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.919730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.919763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.919893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.919925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 
00:31:12.062 [2024-10-14 17:48:10.920164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.920196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.920314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.920345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.920477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.920508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.920647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.920680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.920788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.920818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.920988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.921020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.921307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.921338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.921549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.921579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.921802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.921834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.922084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.922117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 
00:31:12.062 [2024-10-14 17:48:10.922315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.922347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.922542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.922573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.922736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.922805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.923103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.923140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.923406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.923438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.923557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.923589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.923736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.923767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.923959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.923990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.924223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.924253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 00:31:12.062 [2024-10-14 17:48:10.924491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.062 [2024-10-14 17:48:10.924522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.062 qpair failed and we were unable to recover it. 
00:31:12.062 [2024-10-14 17:48:10.924699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.062 [2024-10-14 17:48:10.924731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.062 qpair failed and we were unable to recover it.
00:31:12.062 [2024-10-14 17:48:10.924857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.062 [2024-10-14 17:48:10.924888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.062 qpair failed and we were unable to recover it.
00:31:12.062 [2024-10-14 17:48:10.925090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.062 [2024-10-14 17:48:10.925121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.062 qpair failed and we were unable to recover it.
00:31:12.062 [2024-10-14 17:48:10.925341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.062 [2024-10-14 17:48:10.925379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.062 qpair failed and we were unable to recover it.
00:31:12.062 [2024-10-14 17:48:10.925554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.062 [2024-10-14 17:48:10.925585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.062 qpair failed and we were unable to recover it.
00:31:12.062 [2024-10-14 17:48:10.925775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.062 [2024-10-14 17:48:10.925806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.062 qpair failed and we were unable to recover it.
00:31:12.062 [2024-10-14 17:48:10.925989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.926019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.926187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.926219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.926486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.926518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.926635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.926668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.926785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.926815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.927051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.927081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.927267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.927298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.927583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.927621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.927735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.927767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.927940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.927971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.928234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.928265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.928440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.928471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.928590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.928644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.928763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.928795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.929031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.929063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.929183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.929213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.929491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.929523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.929790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.929824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.929993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.930025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.930212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.930242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.930363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.930393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.930583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.930626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.930814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.930846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.931018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.931048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.931228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.931265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.931517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.931550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.931658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.931691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.931872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.931902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.932139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.932171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.932304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.932335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.932508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.932539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.932739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.932771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.933007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.933038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.933212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.933242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.933434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.933465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.933703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.933736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.933925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.063 [2024-10-14 17:48:10.933957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.063 qpair failed and we were unable to recover it.
00:31:12.063 [2024-10-14 17:48:10.934167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.934197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.934461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.934502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.934698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.934733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.934978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.935010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.935140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.935171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.935351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.935382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.935570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.935611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.935879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.935910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.936037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.936067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.936305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.936337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.936514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.936545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.936759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.936792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.936960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.936991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.937226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.937255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.937377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.937418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.937688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.937721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.937891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.937921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.938050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.938081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.938255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.938286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.938526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.938557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.938673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.938704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.938873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.938904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.939034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.939064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.939298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.939328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.939625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.939657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.939889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.939920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.940021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.940051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.940218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.940249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.940379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.940411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.940594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.940638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.940820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.940851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.941114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.941146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.941336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.941367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.941545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.941576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.941706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.941738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.941920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.941952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.064 [2024-10-14 17:48:10.942070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.064 [2024-10-14 17:48:10.942101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.064 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.942278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.942309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.942499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.942531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.942732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.942766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.942972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.943003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.943242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.943310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.943473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.943509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.943721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.943755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.943883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.943914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.944095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.944126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.944367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.944399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.944664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.944697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.944827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.944859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.945072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.945104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.945361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.945392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.945504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.945535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.945726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.945759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.945963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.945994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.946105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.946142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.946322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.946353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.946539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.946570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.946861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.946902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.947195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.947228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.947344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.947375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.947562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.947593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.947748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.947780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.947961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.947993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.948123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.948154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.948338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.948370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.948628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.948662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.948879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.948910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.949076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.949107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.949229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.949262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.949439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.949469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.949733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.949767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.949953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.949984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.950187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.950218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.950411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.950443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.950564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.950596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.065 [2024-10-14 17:48:10.950812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.065 [2024-10-14 17:48:10.950844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.065 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.950972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.951003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.951134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.951166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.951372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.951403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.951622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.951654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.951796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.951828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.952107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.952141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.952387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.952419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.952680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.952714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.952970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.953002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.953187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.953218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.953481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.953513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.953701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.953734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.953883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.953938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.954133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.954166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.954355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.954387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.954523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.954555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.954756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.954791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.954972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.955003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.955185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.955217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.955462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.955493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.955780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.955813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.955942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.955974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.956234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.956264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.956461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.956492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.956627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.956660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.956772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.956803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.956981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.957012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.957197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.957228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.957415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.957446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.957626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.957659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.957832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.957863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.957963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.957994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.958128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.958163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.066 qpair failed and we were unable to recover it.
00:31:12.066 [2024-10-14 17:48:10.958352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.066 [2024-10-14 17:48:10.958383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.067 qpair failed and we were unable to recover it.
00:31:12.067 [2024-10-14 17:48:10.958500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.067 [2024-10-14 17:48:10.958532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.067 qpair failed and we were unable to recover it.
00:31:12.067 [2024-10-14 17:48:10.958728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.067 [2024-10-14 17:48:10.958762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.067 qpair failed and we were unable to recover it.
00:31:12.067 [2024-10-14 17:48:10.958951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.067 [2024-10-14 17:48:10.958983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.067 qpair failed and we were unable to recover it.
00:31:12.067 [2024-10-14 17:48:10.959167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.067 [2024-10-14 17:48:10.959199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.067 qpair failed and we were unable to recover it.
00:31:12.067 [2024-10-14 17:48:10.959380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.067 [2024-10-14 17:48:10.959412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.067 qpair failed and we were unable to recover it.
00:31:12.067 [2024-10-14 17:48:10.959615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.067 [2024-10-14 17:48:10.959648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.067 qpair failed and we were unable to recover it.
00:31:12.067 [2024-10-14 17:48:10.959749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.067 [2024-10-14 17:48:10.959780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.067 qpair failed and we were unable to recover it.
00:31:12.067 [2024-10-14 17:48:10.959950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.067 [2024-10-14 17:48:10.959982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.067 qpair failed and we were unable to recover it.
00:31:12.067 [2024-10-14 17:48:10.960087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.067 [2024-10-14 17:48:10.960118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.067 qpair failed and we were unable to recover it.
00:31:12.067 [2024-10-14 17:48:10.960353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.067 [2024-10-14 17:48:10.960385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.067 qpair failed and we were unable to recover it.
00:31:12.067 [2024-10-14 17:48:10.960497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.067 [2024-10-14 17:48:10.960528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.067 qpair failed and we were unable to recover it.
00:31:12.067 [2024-10-14 17:48:10.960707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.067 [2024-10-14 17:48:10.960746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.067 qpair failed and we were unable to recover it.
00:31:12.067 [2024-10-14 17:48:10.960883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.067 [2024-10-14 17:48:10.960915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.067 qpair failed and we were unable to recover it.
00:31:12.067 [2024-10-14 17:48:10.961030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.067 [2024-10-14 17:48:10.961062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.067 qpair failed and we were unable to recover it.
00:31:12.067 [2024-10-14 17:48:10.961231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.067 [2024-10-14 17:48:10.961262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.067 qpair failed and we were unable to recover it.
00:31:12.067 [2024-10-14 17:48:10.961461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.067 [2024-10-14 17:48:10.961492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.067 qpair failed and we were unable to recover it.
00:31:12.067 [2024-10-14 17:48:10.961637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.067 [2024-10-14 17:48:10.961670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.067 qpair failed and we were unable to recover it.
00:31:12.067 [2024-10-14 17:48:10.961783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-10-14 17:48:10.961814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-10-14 17:48:10.962001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-10-14 17:48:10.962033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-10-14 17:48:10.962224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-10-14 17:48:10.962256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-10-14 17:48:10.962482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-10-14 17:48:10.962512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-10-14 17:48:10.962713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-10-14 17:48:10.962746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-10-14 17:48:10.962930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-10-14 17:48:10.962962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-10-14 17:48:10.963094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-10-14 17:48:10.963126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-10-14 17:48:10.963248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-10-14 17:48:10.963280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-10-14 17:48:10.963417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-10-14 17:48:10.963449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-10-14 17:48:10.963573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-10-14 17:48:10.963614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 
00:31:12.067 [2024-10-14 17:48:10.963823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-10-14 17:48:10.963855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-10-14 17:48:10.963969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-10-14 17:48:10.964001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-10-14 17:48:10.964180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-10-14 17:48:10.964212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-10-14 17:48:10.964450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-10-14 17:48:10.964481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-10-14 17:48:10.964589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-10-14 17:48:10.964642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.067 [2024-10-14 17:48:10.964819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.067 [2024-10-14 17:48:10.964851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.067 qpair failed and we were unable to recover it. 00:31:12.068 [2024-10-14 17:48:10.965035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-10-14 17:48:10.965070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-10-14 17:48:10.965260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-10-14 17:48:10.965293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-10-14 17:48:10.965508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-10-14 17:48:10.965541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 00:31:12.068 [2024-10-14 17:48:10.965789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.068 [2024-10-14 17:48:10.965822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.068 qpair failed and we were unable to recover it. 
00:31:12.068 [2024-10-14 17:48:10.965957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.068 [2024-10-14 17:48:10.965989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.068 qpair failed and we were unable to recover it.
00:31:12.068 [2024-10-14 17:48:10.966283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.068 [2024-10-14 17:48:10.966352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.068 qpair failed and we were unable to recover it.
...
00:31:12.072 [2024-10-14 17:48:11.004829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-10-14 17:48:11.004861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-10-14 17:48:11.005148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-10-14 17:48:11.005187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-10-14 17:48:11.005425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-10-14 17:48:11.005455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-10-14 17:48:11.005586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-10-14 17:48:11.005630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-10-14 17:48:11.005741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-10-14 17:48:11.005771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-10-14 17:48:11.005908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-10-14 17:48:11.005937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.072 qpair failed and we were unable to recover it. 00:31:12.072 [2024-10-14 17:48:11.006115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.072 [2024-10-14 17:48:11.006147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.006342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.006373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.006567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.006599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.006826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.006857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 
00:31:12.073 [2024-10-14 17:48:11.007046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.007078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.007196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.007227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.007495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.007526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.007803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.007837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.007976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.008009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.008253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.008285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.008493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.008524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.008726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.008759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.008905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.008937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.009176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.009209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 
00:31:12.073 [2024-10-14 17:48:11.009423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.009454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.009626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.009659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.009896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.009927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.010056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.010087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.010389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.010421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.010680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.010713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.010952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.010983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.011276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.011308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.011624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.011659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.011898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.011928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 
00:31:12.073 [2024-10-14 17:48:11.012115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.012146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.012337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.012369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.012636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.012670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.012814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.012846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.013018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.013049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.013268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.013299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.013488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.013519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.013793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.013826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.014009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.014040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.014277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.014308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 
00:31:12.073 [2024-10-14 17:48:11.014564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.014595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-10-14 17:48:11.014798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-10-14 17:48:11.014835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.014965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.014995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.015129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.015159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.015449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.015481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.015700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.015732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.015922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.015952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.016075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.016107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.016236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.016267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.016558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.016589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 
00:31:12.074 [2024-10-14 17:48:11.016737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.016769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.016967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.016998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.017137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.017168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.017451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.017482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.017748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.017781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.017977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.018009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.018203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.018234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.018407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.018438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.018733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.018766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.018979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.019010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 
00:31:12.074 [2024-10-14 17:48:11.019148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.019182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.019448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.019480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.019676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.019709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.019974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.020005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.020265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.020297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.020535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.020567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.020707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.020740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.020941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.020973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.021250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.021281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.021566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.021597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 
00:31:12.074 [2024-10-14 17:48:11.021802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.021835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.022024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.022055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.022177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.022208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.022423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.022454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.022693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.022727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.022920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.022951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-10-14 17:48:11.023189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-10-14 17:48:11.023219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.023407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.023439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.023735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.023769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.023957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.023989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 
00:31:12.075 [2024-10-14 17:48:11.024117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.024149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.024370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.024408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.024671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.024704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.024989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.025020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.025207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.025239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.025349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.025380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.025620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.025653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.025868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.025900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.026028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.026060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.026200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.026231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 
00:31:12.075 [2024-10-14 17:48:11.026370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.026401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.026655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.026690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.026944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.026975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.027131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.027163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.027428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.027461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.027620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.027655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.027832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.027864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.028125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.028157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.028346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.028377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.028591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.028631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 
00:31:12.075 [2024-10-14 17:48:11.028803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.028834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.029092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.029124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.029346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.029376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.029591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.029632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.029822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.029853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.030042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.030072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.030288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.030318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.030579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.030622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.030892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.030925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.031066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.031097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 
00:31:12.075 [2024-10-14 17:48:11.031213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.031243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.031521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.031551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.031779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.031811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.031997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.032028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-10-14 17:48:11.032199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-10-14 17:48:11.032229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.032529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.032561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.032772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.032805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.033045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.033076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.033216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.033247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.033529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.033560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 
00:31:12.076 [2024-10-14 17:48:11.033754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.033785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.034053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.034090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.034382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.034414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.034655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.034688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.034933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.034964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.035171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.035203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.035388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.035419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.035592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.035651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.035870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.035903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.036143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.036175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 
00:31:12.076 [2024-10-14 17:48:11.036399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.036431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.036623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.036657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.036899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.036929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.037117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.037147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.037350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.037382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.037612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.037654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.037778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.037811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.038023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.038055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.038335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.038366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-10-14 17:48:11.038555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-10-14 17:48:11.038587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 
00:31:12.076 [2024-10-14 17:48:11.038792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.076 [2024-10-14 17:48:11.038825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.076 qpair failed and we were unable to recover it.
[... the same three-line record, connect() failed with errno = 111 followed by "qpair failed and we were unable to recover it.", repeats continuously against addr=10.0.0.2, port=4420: first for tqpair=0x7f1a14000b90, once for tqpair=0x2491c60, then for tqpair=0x7f1a18000b90, through console time 00:31:12.082 ...]
00:31:12.082 [2024-10-14 17:48:11.089781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-10-14 17:48:11.089812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-10-14 17:48:11.089934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-10-14 17:48:11.089965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-10-14 17:48:11.090163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-10-14 17:48:11.090194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-10-14 17:48:11.090392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-10-14 17:48:11.090424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-10-14 17:48:11.090650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-10-14 17:48:11.090683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-10-14 17:48:11.090887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-10-14 17:48:11.090920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-10-14 17:48:11.091119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-10-14 17:48:11.091150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-10-14 17:48:11.091447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-10-14 17:48:11.091478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-10-14 17:48:11.091742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.082 [2024-10-14 17:48:11.091776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.082 qpair failed and we were unable to recover it. 00:31:12.082 [2024-10-14 17:48:11.092052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.092082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 
00:31:12.083 [2024-10-14 17:48:11.092399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.092430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.092627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.092668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.092957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.092989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.093292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.093324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.093612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.093645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.093925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.093957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.094153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.094184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.094374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.094406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.094593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.094633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.094841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.094872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 
00:31:12.083 [2024-10-14 17:48:11.095055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.095087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.095284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.095316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.095593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.095642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.095784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.095815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.095959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.095990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.096198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.096231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.096462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.096494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.096690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.096724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.096917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.096949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.097156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.097188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 
00:31:12.083 [2024-10-14 17:48:11.097365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.097397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.097646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.097679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.097864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.097896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.098193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.098226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.098357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.098388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.098642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.098676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.098821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.098852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.098983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.099014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.099167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.099204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.099407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.099438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 
00:31:12.083 [2024-10-14 17:48:11.099727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.099760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.099901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.099933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.100118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.100150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.100377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.100410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.100658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.100692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.100905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.083 [2024-10-14 17:48:11.100937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.083 qpair failed and we were unable to recover it. 00:31:12.083 [2024-10-14 17:48:11.101078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.101109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.101484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.101516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.101732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.101765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.101921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.101952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 
00:31:12.084 [2024-10-14 17:48:11.102150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.102182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.102410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.102444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.102650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.102682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.102940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.102971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.103120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.103152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.103439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.103471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.103593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.103668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.103818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.103850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.104055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.104087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.104239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.104272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 
00:31:12.084 [2024-10-14 17:48:11.104522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.104554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.104810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.104843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.105064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.105095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.105323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.105354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.105620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.105652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.105806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.105839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.106035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.106066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.106343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.106374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.106575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.106617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.106882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.106913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 
00:31:12.084 [2024-10-14 17:48:11.107107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.107139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.107338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.107370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.107628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.107662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.107890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.107922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.108079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.108109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.108327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.108359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.108565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.108597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.108774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.108806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.109009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.109046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.109316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.109348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 
00:31:12.084 [2024-10-14 17:48:11.109572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.109614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.084 qpair failed and we were unable to recover it. 00:31:12.084 [2024-10-14 17:48:11.109865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.084 [2024-10-14 17:48:11.109896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.110094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.110126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.110402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.110433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.110690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.110723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.110927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.110958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.111089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.111121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.111258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.111288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.111589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.111632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.111753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.111784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 
00:31:12.085 [2024-10-14 17:48:11.111923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.111954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.112237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.112269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.112458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.112491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.112682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.112717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.112855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.112886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.113042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.113075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.113280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.113311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.113583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.113625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.113831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.113865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.113990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.114021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 
00:31:12.085 [2024-10-14 17:48:11.114163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.114194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.114480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.114512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.114719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.114752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.114900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.114932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.115078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.115111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.115251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.115284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.115510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.115542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.115809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.115842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.116051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.116083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.116372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.116405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 
00:31:12.085 [2024-10-14 17:48:11.116685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.116719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.116985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.117018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.117171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.117202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.117388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.117420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.117623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.117656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.117848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.117880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.118131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.118164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.118366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.085 [2024-10-14 17:48:11.118398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.085 qpair failed and we were unable to recover it. 00:31:12.085 [2024-10-14 17:48:11.118673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.118712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.118914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.118946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 
00:31:12.086 [2024-10-14 17:48:11.119140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.119172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.119453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.119485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.119710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.119743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.120026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.120059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.120384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.120417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.120555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.120587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.120792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.120823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.121034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.121067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.121305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.121336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.121546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.121578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 
00:31:12.086 [2024-10-14 17:48:11.121745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.121780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.121992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.122023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.122221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.122253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.122470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.122502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.122648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.122682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.122977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.123010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.123236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.123269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.123464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.123496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.123772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.123805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.123996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.124029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 
00:31:12.086 [2024-10-14 17:48:11.124220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.124252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.124448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.124480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.124682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.124716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.124969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.125000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.125141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.125174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.125392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.125424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.125542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.125574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.125785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.125817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.126068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.126100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.126279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.126310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 
00:31:12.086 [2024-10-14 17:48:11.126585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.126629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.126918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.086 [2024-10-14 17:48:11.126949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.086 qpair failed and we were unable to recover it. 00:31:12.086 [2024-10-14 17:48:11.127220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.127252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.127457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.127489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.127732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.127767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.127966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.127998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.128217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.128249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.128527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.128558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.128805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.128849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.129154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.129185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 
00:31:12.087 [2024-10-14 17:48:11.129470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.129503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.129792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.129826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.130090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.130122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.130379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.130410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.130616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.130650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.130901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.130934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.131071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.131102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.131221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.131253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.131376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.131408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.131685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.131720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 
00:31:12.087 [2024-10-14 17:48:11.131837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.131868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.132006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.132037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.132183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.132215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.132533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.132564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.132760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.132794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.133062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.133094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.133372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.133403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.133720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.133754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.133909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.133940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.134146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.134178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 
00:31:12.087 [2024-10-14 17:48:11.134381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.134413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.134608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.134640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.134921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.134953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.135193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.135225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.135365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.135397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.135597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.135653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.135936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.135968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.136168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.136200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.136395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.087 [2024-10-14 17:48:11.136427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.087 qpair failed and we were unable to recover it. 00:31:12.087 [2024-10-14 17:48:11.136625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.136658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 
00:31:12.088 [2024-10-14 17:48:11.136929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.136962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.137214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.137246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.137507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.137540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.137763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.137795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.137934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.137965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.138174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.138206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.138484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.138515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.138723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.138755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.138906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.138943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.139096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.139128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 
00:31:12.088 [2024-10-14 17:48:11.139416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.139447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.139750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.139784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.140060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.140092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.140390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.140421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.140689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.140724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.140928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.140960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.141154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.141185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.141436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.141467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.141613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.141647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.141967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.141997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 
00:31:12.088 [2024-10-14 17:48:11.142238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.142269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.142495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.142528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.142817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.142851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.143120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.143151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.143415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.143448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.143651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.143684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.143958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.143989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.144189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.144221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.144423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.144454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.144737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.144771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 
00:31:12.088 [2024-10-14 17:48:11.144981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.145012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.145213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.145245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-10-14 17:48:11.145448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-10-14 17:48:11.145480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.145757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.145790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.146047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.146079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.146360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.146392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.146634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.146667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.146859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.146890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.147026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.147058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.147280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.147310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 
00:31:12.089 [2024-10-14 17:48:11.147644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.147679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.147905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.147936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.148082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.148114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.148365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.148396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.148696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.148728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.148994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.149026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.149163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.149196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.149412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.149445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.149650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.149689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.149839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.149871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 
00:31:12.089 [2024-10-14 17:48:11.150083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.150114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.150318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.150349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.150626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.150659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.150804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.150835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.151109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.151141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.151342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.151375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.151631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.151663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.151948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.151980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.152129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.152162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.152422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.152454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 
00:31:12.089 [2024-10-14 17:48:11.152731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.152765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.152917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.152949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.153213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.153246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.153530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.153561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.153732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.153764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.154015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-10-14 17:48:11.154047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-10-14 17:48:11.154195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.154227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.154362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.154392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.154667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.154700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.154905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.154937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 
00:31:12.090 [2024-10-14 17:48:11.155136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.155168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.155369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.155401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.155654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.155688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.155828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.155859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.155989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.156022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.156227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.156259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.156514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.156546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.156777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.156811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.156991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.157023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.157226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.157257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 
00:31:12.090 [2024-10-14 17:48:11.157537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.157570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.157747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.157782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.157935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.157966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.158156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.158188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.158466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.158499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.158728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.158763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.158968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.159000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.159183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.159216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.159411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.159449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.159724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.159758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 
00:31:12.090 [2024-10-14 17:48:11.159953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.159984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.160131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.160163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.160475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.160506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.160771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.160804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.160935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.160967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.161239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.161271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.161472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.161503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.161710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.161743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.161867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.161899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.162023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.162054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 
00:31:12.090 [2024-10-14 17:48:11.162328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.162360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.162565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.162598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.162769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.162802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.162946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.162978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.090 [2024-10-14 17:48:11.163166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.090 [2024-10-14 17:48:11.163198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.090 qpair failed and we were unable to recover it. 00:31:12.091 [2024-10-14 17:48:11.163423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.091 [2024-10-14 17:48:11.163456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.091 qpair failed and we were unable to recover it. 00:31:12.091 [2024-10-14 17:48:11.163685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.091 [2024-10-14 17:48:11.163720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.091 qpair failed and we were unable to recover it. 00:31:12.091 [2024-10-14 17:48:11.163974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.091 [2024-10-14 17:48:11.164007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.091 qpair failed and we were unable to recover it. 00:31:12.091 [2024-10-14 17:48:11.164241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.091 [2024-10-14 17:48:11.164273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.091 qpair failed and we were unable to recover it. 00:31:12.091 [2024-10-14 17:48:11.164406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.091 [2024-10-14 17:48:11.164438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.091 qpair failed and we were unable to recover it. 
00:31:12.091 [2024-10-14 17:48:11.164743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.091 [2024-10-14 17:48:11.164777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.091 qpair failed and we were unable to recover it. 00:31:12.091 [2024-10-14 17:48:11.165000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.091 [2024-10-14 17:48:11.165032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.091 qpair failed and we were unable to recover it. 00:31:12.091 [2024-10-14 17:48:11.165173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.091 [2024-10-14 17:48:11.165205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.091 qpair failed and we were unable to recover it. 00:31:12.091 [2024-10-14 17:48:11.165471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.091 [2024-10-14 17:48:11.165504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.091 qpair failed and we were unable to recover it. 00:31:12.091 [2024-10-14 17:48:11.165792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.091 [2024-10-14 17:48:11.165826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.091 qpair failed and we were unable to recover it. 00:31:12.091 [2024-10-14 17:48:11.165969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.091 [2024-10-14 17:48:11.166001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.091 qpair failed and we were unable to recover it. 00:31:12.091 [2024-10-14 17:48:11.166204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.091 [2024-10-14 17:48:11.166236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.091 qpair failed and we were unable to recover it. 00:31:12.091 [2024-10-14 17:48:11.166531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.091 [2024-10-14 17:48:11.166563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.091 qpair failed and we were unable to recover it. 00:31:12.091 [2024-10-14 17:48:11.166743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.091 [2024-10-14 17:48:11.166775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.091 qpair failed and we were unable to recover it. 00:31:12.091 [2024-10-14 17:48:11.166978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.091 [2024-10-14 17:48:11.167012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.091 qpair failed and we were unable to recover it. 
00:31:12.374 [2024-10-14 17:48:11.214894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.214926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.215069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.215101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.215352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.215383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.215680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.215714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.215917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.215948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.216318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.216396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.216710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.216752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.217049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.217084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.217416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.217450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.217743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.217779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 
00:31:12.374 [2024-10-14 17:48:11.217987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.218019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.218305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.218337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.218569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.218611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.218805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.218841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.219100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.219133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.219257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.219290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.219469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.219503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.219729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.219764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.220022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.220056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.220403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.220437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 
00:31:12.374 [2024-10-14 17:48:11.220654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.220687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.220891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.220923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.221064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.221095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.221335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.221367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.221625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.221659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.221805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.221836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.222035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.222067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.222386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.222418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.222598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-10-14 17:48:11.222644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-10-14 17:48:11.222799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.222830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 
00:31:12.375 [2024-10-14 17:48:11.223034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.223065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.223211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.223242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.223517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.223556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.223751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.223784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.224082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.224114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.224311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.224344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.224547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.224580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.224726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.224759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.225013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.225045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.225326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.225358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 
00:31:12.375 [2024-10-14 17:48:11.225561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.225594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.225810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.225842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.226095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.226128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.226434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.226467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.226668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.226703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.226927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.226958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.227228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.227261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.227522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.227555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.227818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.227851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.228147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.228179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 
00:31:12.375 [2024-10-14 17:48:11.228443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.228476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.228661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.228695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.228944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.228976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.229131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.229164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.229369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.229401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.229523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.229554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.229706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.229737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.229943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.229975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.230176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.230208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.230460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.230500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 
00:31:12.375 [2024-10-14 17:48:11.230721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.230754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.230952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.230983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.231277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.231309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.231533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.231564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.231792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.231825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.232023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.232055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.232333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.232365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.232614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.232647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.232899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.232930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-10-14 17:48:11.233242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-10-14 17:48:11.233274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 
00:31:12.376 [2024-10-14 17:48:11.233525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.233565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.233724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.233757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.233976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.234015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.234325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.234362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.234587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.234651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.234887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.234919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.235162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.235201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.235458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.235490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.235785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.235818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.236113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.236145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 
00:31:12.376 [2024-10-14 17:48:11.236440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.236474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.236673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.236707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.236987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.237019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.237216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.237247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.237505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.237537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.237817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.237850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.238067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.238100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.238351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.238385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.238584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.238625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.238901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.238934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 
00:31:12.376 [2024-10-14 17:48:11.239224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.239256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.239532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.239565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.239869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.239905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.240171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.240204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.240497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.240530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.240779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.240813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.241032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.241063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.241323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.241354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.241623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.241656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 00:31:12.376 [2024-10-14 17:48:11.241929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.376 [2024-10-14 17:48:11.241961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.376 qpair failed and we were unable to recover it. 
00:31:12.376 [2024-10-14 17:48:11.242305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.376 [2024-10-14 17:48:11.242380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:12.376 qpair failed and we were unable to recover it.
00:31:12.377 [... repeated 39 more times for tqpair=0x7f1a14000b90, timestamps 17:48:11.242620 through 17:48:11.252188 ...]
00:31:12.377 [2024-10-14 17:48:11.252451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.377 [2024-10-14 17:48:11.252526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.377 qpair failed and we were unable to recover it.
00:31:12.377 [2024-10-14 17:48:11.252926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.377 [2024-10-14 17:48:11.253004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.377 qpair failed and we were unable to recover it.
00:31:12.378 [... repeated 38 more times for tqpair=0x7f1a20000b90, timestamps 17:48:11.253329 through 17:48:11.262644 ...]
00:31:12.378 [2024-10-14 17:48:11.262837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.378 [2024-10-14 17:48:11.262914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.378 qpair failed and we were unable to recover it. 00:31:12.378 [2024-10-14 17:48:11.263198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.378 [2024-10-14 17:48:11.263235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.378 qpair failed and we were unable to recover it. 00:31:12.378 [2024-10-14 17:48:11.263449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.378 [2024-10-14 17:48:11.263483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.378 qpair failed and we were unable to recover it. 00:31:12.378 [2024-10-14 17:48:11.263759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.378 [2024-10-14 17:48:11.263795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.378 qpair failed and we were unable to recover it. 00:31:12.378 [2024-10-14 17:48:11.263989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.378 [2024-10-14 17:48:11.264022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.264221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.264254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.264447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.264480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.264745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.264779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.264958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.264990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.265186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.265218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 
00:31:12.379 [2024-10-14 17:48:11.265489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.265521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.265729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.265762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.265946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.265978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.266139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.266181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.266360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.266394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.266622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.266656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.266919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.266952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.267236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.267267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.267471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.267503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.267764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.267798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 
00:31:12.379 [2024-10-14 17:48:11.268013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.268045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.268195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.268227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.268422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.268454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.268666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.268699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.268929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.268961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.269103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.269135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.269339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.269372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.269613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.269647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.269848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.269881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.270079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.270111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 
00:31:12.379 [2024-10-14 17:48:11.270324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.270357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.270661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.270695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.270958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.270990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.271288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.271322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.271532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.271564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.271839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.271872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.272082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.272114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.272466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.272497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.272779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.272814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.272960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.272993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 
00:31:12.379 [2024-10-14 17:48:11.273366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.273398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.273685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.273719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.273948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.273980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.274186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.274217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.274447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.379 [2024-10-14 17:48:11.274479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.379 qpair failed and we were unable to recover it. 00:31:12.379 [2024-10-14 17:48:11.274679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.274713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.275000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.275033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.275357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.275390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.275713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.275747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.275886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.275917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 
00:31:12.380 [2024-10-14 17:48:11.276192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.276224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.276425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.276456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.276639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.276673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.276816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.276846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.277062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.277095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.277374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.277406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.277540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.277571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.277786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.277819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.278005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.278037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.278268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.278300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 
00:31:12.380 [2024-10-14 17:48:11.278554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.278587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.278805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.278838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.279043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.279076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.279275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.279306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.279533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.279565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.279774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.279807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.279945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.279977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.280185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.280216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.280481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.280515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.280793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.280827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 
00:31:12.380 [2024-10-14 17:48:11.281111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.281142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.281377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.281409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.281692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.281726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.281855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.281887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.282032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.282063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.282213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.282245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.282460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.282492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.282692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.282726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.282911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.282943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.283151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.283183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 
00:31:12.380 [2024-10-14 17:48:11.283457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.283494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.283712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.283746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.283952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.283984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.284123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.284156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.284365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.284397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.380 qpair failed and we were unable to recover it. 00:31:12.380 [2024-10-14 17:48:11.284642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.380 [2024-10-14 17:48:11.284676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.284872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.284904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.285130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.285162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.285502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.285533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.285743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.285775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 
00:31:12.381 [2024-10-14 17:48:11.285970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.286002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.286295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.286327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.286598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.286642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.286840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.286872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.287138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.287172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.287495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.287527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.287781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.287815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.288105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.288136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.288278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.288311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.288623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.288657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 
00:31:12.381 [2024-10-14 17:48:11.288804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.288836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.289056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.289089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.289394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.289426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.289617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.289650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.289856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.289888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.290044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.290076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.290215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.290246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.290501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.290533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.290763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.290796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.290995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.291026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 
00:31:12.381 [2024-10-14 17:48:11.291245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.291276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.291469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.291501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.291775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.291809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.291920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.291952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.292223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.292256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.292543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.292574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.292762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.292795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.293015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.293045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.293183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.293215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.293427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.293459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 
00:31:12.381 [2024-10-14 17:48:11.293687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.293725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.293920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.293950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.294070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.294100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.294374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.294406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.294699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.294732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.294940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.381 [2024-10-14 17:48:11.294972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.381 qpair failed and we were unable to recover it. 00:31:12.381 [2024-10-14 17:48:11.295270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.382 [2024-10-14 17:48:11.295302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.382 qpair failed and we were unable to recover it. 00:31:12.382 [2024-10-14 17:48:11.295490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.382 [2024-10-14 17:48:11.295522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.382 qpair failed and we were unable to recover it. 00:31:12.382 [2024-10-14 17:48:11.295714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.382 [2024-10-14 17:48:11.295748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.382 qpair failed and we were unable to recover it. 00:31:12.382 [2024-10-14 17:48:11.295980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.382 [2024-10-14 17:48:11.296011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.382 qpair failed and we were unable to recover it. 
00:31:12.382 [2024-10-14 17:48:11.296214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.382 [2024-10-14 17:48:11.296245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.382 qpair failed and we were unable to recover it. 00:31:12.382 [2024-10-14 17:48:11.296588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.382 [2024-10-14 17:48:11.296630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.382 qpair failed and we were unable to recover it. 00:31:12.382 [2024-10-14 17:48:11.296848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.382 [2024-10-14 17:48:11.296880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.382 qpair failed and we were unable to recover it. 00:31:12.382 [2024-10-14 17:48:11.297144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.382 [2024-10-14 17:48:11.297176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.382 qpair failed and we were unable to recover it. 00:31:12.382 [2024-10-14 17:48:11.297401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.382 [2024-10-14 17:48:11.297432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.382 qpair failed and we were unable to recover it. 00:31:12.382 [2024-10-14 17:48:11.297632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.382 [2024-10-14 17:48:11.297665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.382 qpair failed and we were unable to recover it. 00:31:12.382 [2024-10-14 17:48:11.297857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.382 [2024-10-14 17:48:11.297889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.382 qpair failed and we were unable to recover it. 00:31:12.382 [2024-10-14 17:48:11.298023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.382 [2024-10-14 17:48:11.298055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.382 qpair failed and we were unable to recover it. 00:31:12.382 [2024-10-14 17:48:11.298291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.382 [2024-10-14 17:48:11.298323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.382 qpair failed and we were unable to recover it. 00:31:12.382 [2024-10-14 17:48:11.298505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.382 [2024-10-14 17:48:11.298535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.382 qpair failed and we were unable to recover it. 
00:31:12.387 [2024-10-14 17:48:11.346260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.346293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.346472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.346502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.346812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.346846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.347003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.347033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.347267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.347298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.347564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.347595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.347758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.347789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.347972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.348003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.348304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.348336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.348621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.348655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 
00:31:12.387 [2024-10-14 17:48:11.348859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.348890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.349095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.349128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.349458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.349490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.349715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.349747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.350049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.350080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.350223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.350254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.350454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.350485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.350711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.350745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.350944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.350975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.351343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.351374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 
00:31:12.387 [2024-10-14 17:48:11.351689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.351722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.351976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.352006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.352267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.352300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.352578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.352617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.352803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.352835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.353047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.353079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.353306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.353337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.353538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.353569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.353852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.353886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.387 qpair failed and we were unable to recover it. 00:31:12.387 [2024-10-14 17:48:11.354093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.387 [2024-10-14 17:48:11.354124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 
00:31:12.388 [2024-10-14 17:48:11.354396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.354433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.354686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.354721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.354930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.354960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.355157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.355190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.355417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.355449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.355674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.355706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.355979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.356010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.356160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.356192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.356469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.356500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.356690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.356724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 
00:31:12.388 [2024-10-14 17:48:11.356925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.356956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.357157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.357190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.357417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.357449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.357706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.357739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.358000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.358033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.358183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.358213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.358422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.358453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.358677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.358711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.358963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.358995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.359139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.359169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 
00:31:12.388 [2024-10-14 17:48:11.359373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.359403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.359713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.359746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.359901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.359932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.360086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.360116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.360449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.360482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.360690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.360723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.360866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.360898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.361264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.361342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.361569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.361617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.361812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.361845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 
00:31:12.388 [2024-10-14 17:48:11.362041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.362074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.362381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.362413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.362543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.362575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.362774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.362807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.362954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.362985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.363193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.363226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.363421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.363452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.363770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.363803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.363959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.363991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 00:31:12.388 [2024-10-14 17:48:11.364138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.388 [2024-10-14 17:48:11.364170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.388 qpair failed and we were unable to recover it. 
00:31:12.389 [2024-10-14 17:48:11.364419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.364462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.364622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.364656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.364796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.364827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.364979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.365011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.365207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.365238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.365535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.365567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.365753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.365787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.365913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.365946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.366101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.366133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.366415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.366448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 
00:31:12.389 [2024-10-14 17:48:11.366631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.366665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.366867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.366900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.367045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.367077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.367280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.367313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.367613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.367647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.367853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.367885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.368086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.368119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.368470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.368502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.368770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.368803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.368999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.369031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 
00:31:12.389 [2024-10-14 17:48:11.369182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.369214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.369464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.369496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.369694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.369728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.369935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.369967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.370111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.370143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.370377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.370410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.370691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.370724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.370860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.370893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.371078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.371110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.371383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.371415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 
00:31:12.389 [2024-10-14 17:48:11.371620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.371654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.371786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.371817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.372017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.372050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.372292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.372324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.372520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.372552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.372820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.372854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.373057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.373088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.373398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.373431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.373649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.373683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 00:31:12.389 [2024-10-14 17:48:11.373877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.389 [2024-10-14 17:48:11.373908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.389 qpair failed and we were unable to recover it. 
00:31:12.389 [2024-10-14 17:48:11.374108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.374147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.374382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.374415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.374561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.374593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.374799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.374832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.375083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.375115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.375389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.375422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.375626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.375660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.375872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.375905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.376058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.376091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.376232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.376264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 
00:31:12.390 [2024-10-14 17:48:11.376403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.376436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.376643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.376676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.376892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.376924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.377069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.377101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.377239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.377271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.377487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.377520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.377814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.377848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.378049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.378081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.378366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.378400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.378673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.378707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 
00:31:12.390 [2024-10-14 17:48:11.378898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.378930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.379064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.379096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.379347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.379379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.379631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.379665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.379797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.379828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.380031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.380063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.380266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.380299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.380506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.380539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.380835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.380868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.381072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.381104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 
00:31:12.390 [2024-10-14 17:48:11.381373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.381405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.381606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.381640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.381823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.381855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.382060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.382092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.382239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.382270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.382549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.382580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.382771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.382801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.382922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.382954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.383105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.383138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.383264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.383297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 
00:31:12.390 [2024-10-14 17:48:11.383483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.390 [2024-10-14 17:48:11.383527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.390 qpair failed and we were unable to recover it. 00:31:12.390 [2024-10-14 17:48:11.383791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.391 [2024-10-14 17:48:11.383826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.391 qpair failed and we were unable to recover it. 00:31:12.391 [2024-10-14 17:48:11.384113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.391 [2024-10-14 17:48:11.384146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.391 qpair failed and we were unable to recover it. 00:31:12.391 [2024-10-14 17:48:11.384353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.391 [2024-10-14 17:48:11.384385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.391 qpair failed and we were unable to recover it. 00:31:12.391 [2024-10-14 17:48:11.384649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.391 [2024-10-14 17:48:11.384683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.391 qpair failed and we were unable to recover it. 00:31:12.391 [2024-10-14 17:48:11.384984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.391 [2024-10-14 17:48:11.385016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.391 qpair failed and we were unable to recover it. 00:31:12.391 [2024-10-14 17:48:11.385213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.391 [2024-10-14 17:48:11.385246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.391 qpair failed and we were unable to recover it. 00:31:12.391 [2024-10-14 17:48:11.385376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.391 [2024-10-14 17:48:11.385408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.391 qpair failed and we were unable to recover it. 00:31:12.391 [2024-10-14 17:48:11.385612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.391 [2024-10-14 17:48:11.385646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.391 qpair failed and we were unable to recover it. 00:31:12.391 [2024-10-14 17:48:11.385851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.391 [2024-10-14 17:48:11.385884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.391 qpair failed and we were unable to recover it. 
00:31:12.391 [2024-10-14 17:48:11.386183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.386215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.386490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.386522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.386807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.386841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.386975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.387007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.387316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.387350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.387541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.387573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.387725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.387757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.387967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.388000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.388377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.388410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.388663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.388697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.388890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.388922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.389117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.389150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.389348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.389381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.389582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.389624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.389880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.389912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.390122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.390155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.390356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.390389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.390610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.390643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.390847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.390878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.391039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.391071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.391379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.391410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.391592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.391635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.391766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.391798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.392008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.392039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.392324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.392356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.392620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.392653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.391 qpair failed and we were unable to recover it.
00:31:12.391 [2024-10-14 17:48:11.392810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.391 [2024-10-14 17:48:11.392841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.393058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.393090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.393344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.393376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.393581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.393636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.393838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.393877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.394023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.394055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.394346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.394380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.394655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.394689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.394839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.394874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.394999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.395030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.395284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.395316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.395467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.395499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.395800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.395833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.395979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.396011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.396157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.396190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.396465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.396498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.396653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.396687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.396872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.396905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.397103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.397136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.397427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.397459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.397666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.397700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.397972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.398004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.398201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.398233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.398435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.398468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.398660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.398694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.398970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.399002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.399321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.399352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.399615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.399648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.399928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.399961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.400166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.400198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.400379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.400412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.400639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.400674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.400821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.400852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.400984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.401016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.401273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.401306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.401428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.401460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.401670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.401704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.401943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.401976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.402176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.402210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.402521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.402553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.392 [2024-10-14 17:48:11.402783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.392 [2024-10-14 17:48:11.402816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.392 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.403089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.403122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.403396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.403429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.403700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.403733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.403932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.403965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.404121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.404154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.404447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.404480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.404687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.404720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.404925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.404958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.405211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.405243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.405406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.405439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.405748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.405781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.405971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.406004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.406216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.406248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.406465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.406498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.406715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.406749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.406899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.406932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.407138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.407171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.407386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.407419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.407645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.407679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.407818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.407852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.408058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.408090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.408353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.408385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.408678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.408711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.408986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.409018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.409210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.409243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.409458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.409490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.409623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.409657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.409807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.409840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.410054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.410087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.410344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.410376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.410570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.410632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.410774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.410807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.411051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.411083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.411392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.411425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.411640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.411674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.411878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.411911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.412187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.412219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.412507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.412540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.412773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.412806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.413005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.413036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.393 qpair failed and we were unable to recover it.
00:31:12.393 [2024-10-14 17:48:11.413318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.393 [2024-10-14 17:48:11.413351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.413615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.413649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.413846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.413878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.414105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.414136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.414400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.414434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.414726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.414760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.415028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.415060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.415316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.415349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.415649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.415683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.415833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.415865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.416067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.416099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.416373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.416405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.416743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.416776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.416974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.417007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.417284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.417317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.417556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.417588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.417870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.417903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.418114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.418147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.418424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.418456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.418683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.418717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.418981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.419014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.419212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.419244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.419514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.419546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.419754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.419788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.420029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.420061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.420254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.420286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.420507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.420539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.420747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.420781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.420925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.420958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.421176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.421209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.421515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.421553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.421871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.421905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.422101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.422134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.422388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.422420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.422631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.422664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.422918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.422950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.423242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.423273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.423548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.423581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.423851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.423884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.424162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.424194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.394 qpair failed and we were unable to recover it.
00:31:12.394 [2024-10-14 17:48:11.424480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.394 [2024-10-14 17:48:11.424512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.424841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.424874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.425133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.425166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.425309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.425341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.425636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.425671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.425924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.425957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.426097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.426130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.426417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.426450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.426725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.426759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.426912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.426945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.427092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.427125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.427420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.427458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.427658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.427692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.427824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.427856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.428075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.428109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.428334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.428368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.428562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.428595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.428814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.428848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.429102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.429135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.429409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.429442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.429646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.429681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.429866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.429898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.430176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.395 [2024-10-14 17:48:11.430209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.395 qpair failed and we were unable to recover it.
00:31:12.395 [2024-10-14 17:48:11.430509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.395 [2024-10-14 17:48:11.430542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.395 qpair failed and we were unable to recover it. 00:31:12.395 [2024-10-14 17:48:11.430831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.395 [2024-10-14 17:48:11.430864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.395 qpair failed and we were unable to recover it. 00:31:12.395 [2024-10-14 17:48:11.431060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.395 [2024-10-14 17:48:11.431092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.395 qpair failed and we were unable to recover it. 00:31:12.395 [2024-10-14 17:48:11.431305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.395 [2024-10-14 17:48:11.431338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.395 qpair failed and we were unable to recover it. 00:31:12.395 [2024-10-14 17:48:11.431590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.395 [2024-10-14 17:48:11.431630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.395 qpair failed and we were unable to recover it. 00:31:12.395 [2024-10-14 17:48:11.431859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.395 [2024-10-14 17:48:11.431892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.395 qpair failed and we were unable to recover it. 00:31:12.395 [2024-10-14 17:48:11.432094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.395 [2024-10-14 17:48:11.432126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.395 qpair failed and we were unable to recover it. 00:31:12.395 [2024-10-14 17:48:11.432344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.395 [2024-10-14 17:48:11.432383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.395 qpair failed and we were unable to recover it. 00:31:12.395 [2024-10-14 17:48:11.432688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.395 [2024-10-14 17:48:11.432723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.395 qpair failed and we were unable to recover it. 00:31:12.395 [2024-10-14 17:48:11.432886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.395 [2024-10-14 17:48:11.432919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.395 qpair failed and we were unable to recover it. 
00:31:12.395 [2024-10-14 17:48:11.433150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.395 [2024-10-14 17:48:11.433181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.395 qpair failed and we were unable to recover it. 00:31:12.395 [2024-10-14 17:48:11.433464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.395 [2024-10-14 17:48:11.433496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.395 qpair failed and we were unable to recover it. 00:31:12.395 [2024-10-14 17:48:11.433701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.395 [2024-10-14 17:48:11.433736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.395 qpair failed and we were unable to recover it. 00:31:12.395 [2024-10-14 17:48:11.433877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.395 [2024-10-14 17:48:11.433909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.395 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.434116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.434148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.434477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.434510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.434743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.434776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.435048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.435080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.435287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.435319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.435568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.435610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 
00:31:12.396 [2024-10-14 17:48:11.435758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.435790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.435992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.436023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.436233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.436264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.436488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.436518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.436797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.436829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.437110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.437140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.437338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.437369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.437569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.437599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.437784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.437814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.438021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.438051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 
00:31:12.396 [2024-10-14 17:48:11.438303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.438333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.438588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.438627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.438882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.438912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.439064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.439093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.439360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.439391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.439677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.439709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.439933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.439963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.440163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.440194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.440443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.440474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.440754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.440786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 
00:31:12.396 [2024-10-14 17:48:11.441062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.441093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.441406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.441436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.441639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.441674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.441870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.441902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.442176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.442208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.442472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.442505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.442708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.442741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.442943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.442982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.443179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.443211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.443486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.443518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 
00:31:12.396 [2024-10-14 17:48:11.443743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.443777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.443905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.443937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.444168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.444201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.444500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.396 [2024-10-14 17:48:11.444532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.396 qpair failed and we were unable to recover it. 00:31:12.396 [2024-10-14 17:48:11.444802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.444837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.445119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.445152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.445431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.445463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.445649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.445682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.445883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.445915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.446116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.446148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 
00:31:12.397 [2024-10-14 17:48:11.446355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.446388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.446694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.446728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.446987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.447019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.447298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.447331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.447543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.447576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.447718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.447750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.447949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.447981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.448127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.448160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.448438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.448469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.448668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.448702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 
00:31:12.397 [2024-10-14 17:48:11.448910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.448942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.449076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.449107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.449385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.449417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.449613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.449646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.449833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.449865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.450119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.450150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.450372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.450405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.450585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.450627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.450826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.450858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.451019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.451051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 
00:31:12.397 [2024-10-14 17:48:11.451329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.451362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.451544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.451576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.451847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.451883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.452087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.452118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.452381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.452413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.452670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.452703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.452902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.452934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.453212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.453250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.453453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.453484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.453741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.453775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 
00:31:12.397 [2024-10-14 17:48:11.453921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.453953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.454159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.454191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.454467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.454499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.397 [2024-10-14 17:48:11.454772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.397 [2024-10-14 17:48:11.454806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.397 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.454922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.454954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.455174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.455206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.455405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.455438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.455693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.455727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.456007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.456040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.456258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.456290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 
00:31:12.398 [2024-10-14 17:48:11.456570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.456628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.456888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.456920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.457205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.457237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.457516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.457548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.457805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.457839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.458140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.458172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.458419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.458451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.458775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.458809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.459060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.459092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.459353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.459384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 
00:31:12.398 [2024-10-14 17:48:11.459682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.459715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.459998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.460030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.460285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.460317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.460639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.460673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.460872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.460904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.461022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.461054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.461238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.461269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.461505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.461537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.461759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.461792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.461918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.461950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 
00:31:12.398 [2024-10-14 17:48:11.462165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.462197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.462330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.462362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.462678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.462712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.462907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.462940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.463191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.463223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.463476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.463507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.463690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.463724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.463946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.463983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.464289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.464321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.464577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.464618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 
00:31:12.398 [2024-10-14 17:48:11.464922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.464954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.465201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.465233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.465496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.465528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.465829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.398 [2024-10-14 17:48:11.465862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.398 qpair failed and we were unable to recover it. 00:31:12.398 [2024-10-14 17:48:11.466128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-10-14 17:48:11.466160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-10-14 17:48:11.466344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-10-14 17:48:11.466375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-10-14 17:48:11.466654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-10-14 17:48:11.466688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-10-14 17:48:11.466876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-10-14 17:48:11.466908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-10-14 17:48:11.467180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-10-14 17:48:11.467212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-10-14 17:48:11.467395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-10-14 17:48:11.467426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 
00:31:12.399 [2024-10-14 17:48:11.467633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-10-14 17:48:11.467666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-10-14 17:48:11.467930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-10-14 17:48:11.467961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-10-14 17:48:11.468203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-10-14 17:48:11.468235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-10-14 17:48:11.468513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-10-14 17:48:11.468545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-10-14 17:48:11.468836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-10-14 17:48:11.468869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-10-14 17:48:11.469012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-10-14 17:48:11.469043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-10-14 17:48:11.469226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-10-14 17:48:11.469258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-10-14 17:48:11.469471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-10-14 17:48:11.469502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-10-14 17:48:11.469691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-10-14 17:48:11.469724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 00:31:12.399 [2024-10-14 17:48:11.469977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.399 [2024-10-14 17:48:11.470009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.399 qpair failed and we were unable to recover it. 
00:31:12.399 [2024-10-14 17:48:11.470210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.399 [2024-10-14 17:48:11.470242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.399 qpair failed and we were unable to recover it.
[identical entries condensed: the same three-line failure repeats for every subsequent connect attempt from 17:48:11.470519 through 17:48:11.527522 (elapsed 00:31:12.399 to 00:31:12.681). Each attempt logs posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."]
00:31:12.681 [2024-10-14 17:48:11.527784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.527818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-10-14 17:48:11.528046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.528078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-10-14 17:48:11.528257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.528289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-10-14 17:48:11.528562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.528594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-10-14 17:48:11.528826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.528859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-10-14 17:48:11.529042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.529073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-10-14 17:48:11.529292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.529324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-10-14 17:48:11.529611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.529651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-10-14 17:48:11.529924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.529956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-10-14 17:48:11.530101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.530133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 
00:31:12.681 [2024-10-14 17:48:11.530454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.530486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-10-14 17:48:11.530739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.530772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-10-14 17:48:11.531023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.531055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-10-14 17:48:11.531209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.531240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-10-14 17:48:11.531517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.531549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-10-14 17:48:11.531860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.531894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-10-14 17:48:11.532169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.532201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-10-14 17:48:11.532394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.532425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-10-14 17:48:11.532622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.532656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-10-14 17:48:11.532807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.532839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 
00:31:12.681 [2024-10-14 17:48:11.533057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.533089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.681 [2024-10-14 17:48:11.533382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.681 [2024-10-14 17:48:11.533415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.681 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.533560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.533591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.533783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.533816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.534000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.534032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.534286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.534317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.534570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.534613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.534739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.534771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.535048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.535081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.535354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.535386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 
00:31:12.682 [2024-10-14 17:48:11.535542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.535573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.535849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.535882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.536038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.536069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.536353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.536385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.536644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.536679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.536883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.536914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.537189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.537220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.537361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.537392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.537671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.537705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.537888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.537919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 
00:31:12.682 [2024-10-14 17:48:11.538101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.538133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.538418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.538450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.538716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.538748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.539051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.539083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.539279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.539312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.539585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.539627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.539910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.539942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.540244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.540275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.540506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.540538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.540858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.540891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 
00:31:12.682 [2024-10-14 17:48:11.541160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.541192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.541467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.541499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.541709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.541743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.541994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.542026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.542330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.542361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.682 [2024-10-14 17:48:11.542643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.682 [2024-10-14 17:48:11.542677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.682 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.542884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.542916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.543194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.543225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.543416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.543449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.543737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.543770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 
00:31:12.683 [2024-10-14 17:48:11.543972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.544004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.544197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.544229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.544444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.544476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.544745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.544779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.545006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.545038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.545291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.545323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.545518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.545550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.545833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.545866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.546101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.546133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.546383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.546414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 
00:31:12.683 [2024-10-14 17:48:11.546648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.546681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.546956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.546988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.547193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.547226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.547373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.547405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.547717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.547757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.547968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.548000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.548145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.548177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.548360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.548392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.548680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.548714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.548938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.548971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 
00:31:12.683 [2024-10-14 17:48:11.549224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.549256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.549469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.549501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.549701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.549735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.550010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.550042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.550223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.550255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.550509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.550541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.550841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.550875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.551080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.551113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.551267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.551299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.551616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.551649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 
00:31:12.683 [2024-10-14 17:48:11.551925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.551957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.552150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.552182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.552445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.552477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.552777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.552811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.553088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.553120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.553333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.683 [2024-10-14 17:48:11.553365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.683 qpair failed and we were unable to recover it. 00:31:12.683 [2024-10-14 17:48:11.553667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.553700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.553989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.554021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.554246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.554278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.554498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.554530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 
00:31:12.684 [2024-10-14 17:48:11.554780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.554813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.555023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.555055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.555323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.555355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.555655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.555689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.555873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.555905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.556185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.556217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.556484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.556516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.556780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.556814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.557113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.557145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.557359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.557391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 
00:31:12.684 [2024-10-14 17:48:11.557616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.557650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.557928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.557960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.558212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.558244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.558513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.558545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.558763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.558801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.559104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.559135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.559398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.559431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.559621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.559654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.559935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.559968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.560108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.560141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 
00:31:12.684 [2024-10-14 17:48:11.560393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.560425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.560632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.560666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.560937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.560969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.561171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.561203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.561399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.561431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.561704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.561737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.562020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.562052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.562281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.562313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.562623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.562657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 00:31:12.684 [2024-10-14 17:48:11.562946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.684 [2024-10-14 17:48:11.562978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.684 qpair failed and we were unable to recover it. 
00:31:12.684 [2024-10-14 17:48:11.563169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.684 [2024-10-14 17:48:11.563201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:12.684 qpair failed and we were unable to recover it.
00:31:12.690 [... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 17:48:11.563 through 17:48:11.621; roughly 200 near-identical repetitions elided ...]
00:31:12.690 [2024-10-14 17:48:11.621986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.622018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.622321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.622353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.622635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.622668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.622950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.622983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.623245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.623277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.623471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.623503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.623786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.623819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.624094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.624125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.624326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.624358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.624623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.624656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 
00:31:12.690 [2024-10-14 17:48:11.624953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.624985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.625267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.625298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.625546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.625578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.625872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.625905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.626178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.626218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.626474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.626506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.626807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.626840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.627130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.627162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.627441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.627473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.627762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.627796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 
00:31:12.690 [2024-10-14 17:48:11.628075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.628107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.628394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.628426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.628652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.628685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.628965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.628997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.629279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.629311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.629591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.629633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.629862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.629894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.630111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.630143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.630274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.630307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.630614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.630648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 
00:31:12.690 [2024-10-14 17:48:11.630832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.690 [2024-10-14 17:48:11.630864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.690 qpair failed and we were unable to recover it. 00:31:12.690 [2024-10-14 17:48:11.631111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.631143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.631418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.631450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.631699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.631732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.631998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.632030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.632283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.632315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.632569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.632608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.632828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.632860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.633133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.633165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.633439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.633471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 
00:31:12.691 [2024-10-14 17:48:11.633613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.633645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.633925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.633958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.634151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.634183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.634312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.634344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.634623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.634657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.634939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.634971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.635179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.635214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.635494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.635527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.635733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.635768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.635998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.636030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 
00:31:12.691 [2024-10-14 17:48:11.636213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.636244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.636516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.636548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.636780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.636813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.637072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.637104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.637364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.637401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.637537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.637569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.637853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.637886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.638190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.638221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.638486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.638518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.638716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.638749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 
00:31:12.691 [2024-10-14 17:48:11.639021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.639052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.639236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.639268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.639544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.639575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.639869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.639902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.640102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.640134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.640323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.640355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.640535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.640567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.640779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.640813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.691 [2024-10-14 17:48:11.641019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.691 [2024-10-14 17:48:11.641051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.691 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.641325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.641356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 
00:31:12.692 [2024-10-14 17:48:11.641550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.641582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.641899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.641932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.642238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.642270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.642465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.642496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.642750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.642783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.643062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.643094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.643296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.643328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.643584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.643632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.643852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.643885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.644101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.644132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 
00:31:12.692 [2024-10-14 17:48:11.644414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.644446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.644644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.644677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.644879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.644911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.645167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.645199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.645472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.645503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.645702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.645735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.645989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.646021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.646291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.646323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.646469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.646501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.646682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.646716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 
00:31:12.692 [2024-10-14 17:48:11.646903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.646935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.647192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.647224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.647572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.647628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.647913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.647945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.648203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.648241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.648518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.648550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.648851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.648884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.649172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.649205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.649484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.649515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.649716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.649749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 
00:31:12.692 [2024-10-14 17:48:11.649955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.649987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.650214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.650245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.650530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.650562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.650845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.650878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.651162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.651193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.651386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.651418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.651620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.651654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.651927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.651959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.652180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.692 [2024-10-14 17:48:11.652213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.692 qpair failed and we were unable to recover it. 00:31:12.692 [2024-10-14 17:48:11.652340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.652372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 
00:31:12.693 [2024-10-14 17:48:11.652696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.652729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 00:31:12.693 [2024-10-14 17:48:11.652983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.653015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 00:31:12.693 [2024-10-14 17:48:11.653222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.653255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 00:31:12.693 [2024-10-14 17:48:11.653533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.653564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 00:31:12.693 [2024-10-14 17:48:11.653851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.653885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 00:31:12.693 [2024-10-14 17:48:11.654136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.654169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 00:31:12.693 [2024-10-14 17:48:11.654417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.654449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 00:31:12.693 [2024-10-14 17:48:11.654671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.654704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 00:31:12.693 [2024-10-14 17:48:11.654968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.655001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 00:31:12.693 [2024-10-14 17:48:11.655297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.655328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 
00:31:12.693 [2024-10-14 17:48:11.655574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.655626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 00:31:12.693 [2024-10-14 17:48:11.655918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.655950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 00:31:12.693 [2024-10-14 17:48:11.656196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.656227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 00:31:12.693 [2024-10-14 17:48:11.656480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.656512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 00:31:12.693 [2024-10-14 17:48:11.656734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.656768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 00:31:12.693 [2024-10-14 17:48:11.656952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.656983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 00:31:12.693 [2024-10-14 17:48:11.657286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.657317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 00:31:12.693 [2024-10-14 17:48:11.657458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.657490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 00:31:12.693 [2024-10-14 17:48:11.657762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.657795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 00:31:12.693 [2024-10-14 17:48:11.658078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.658111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it. 
00:31:12.693 [2024-10-14 17:48:11.658396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.693 [2024-10-14 17:48:11.658428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:12.693 qpair failed and we were unable to recover it.
00:31:12.694 [... the connect() failed / sock connection error / qpair failed triplet above repeated for tqpair=0x7f1a20000b90 through 2024-10-14 17:48:11.666721 ...]
00:31:12.694 [2024-10-14 17:48:11.667118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.694 [2024-10-14 17:48:11.667195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.694 qpair failed and we were unable to recover it.
00:31:12.698 [... the same triplet repeated for tqpair=0x7f1a18000b90 through 2024-10-14 17:48:11.716686 ...]
00:31:12.698 [2024-10-14 17:48:11.716944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.698 [2024-10-14 17:48:11.716976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.698 qpair failed and we were unable to recover it. 00:31:12.698 [2024-10-14 17:48:11.717233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.698 [2024-10-14 17:48:11.717265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.698 qpair failed and we were unable to recover it. 00:31:12.698 [2024-10-14 17:48:11.717567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.698 [2024-10-14 17:48:11.717628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.698 qpair failed and we were unable to recover it. 00:31:12.698 [2024-10-14 17:48:11.717914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.698 [2024-10-14 17:48:11.717947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.698 qpair failed and we were unable to recover it. 00:31:12.698 [2024-10-14 17:48:11.718228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.698 [2024-10-14 17:48:11.718260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.698 qpair failed and we were unable to recover it. 00:31:12.698 [2024-10-14 17:48:11.718568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.698 [2024-10-14 17:48:11.718609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.698 qpair failed and we were unable to recover it. 00:31:12.698 [2024-10-14 17:48:11.718862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.698 [2024-10-14 17:48:11.718895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.698 qpair failed and we were unable to recover it. 00:31:12.698 [2024-10-14 17:48:11.719100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.719132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.719347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.719379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.719684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.719718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 
00:31:12.699 [2024-10-14 17:48:11.719924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.719956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.720238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.720269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.720558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.720590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.720872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.720904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.721116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.721148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.721426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.721459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.721740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.721774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.722003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.722035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.722233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.722265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.722547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.722579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 
00:31:12.699 [2024-10-14 17:48:11.722887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.722920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.723180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.723218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.723517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.723549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.723813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.723846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.724100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.724132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.724434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.724466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.724733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.724767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.724990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.725023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.725162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.725194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.725403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.725434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 
00:31:12.699 [2024-10-14 17:48:11.725712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.725746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.726023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.726056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.726274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.726305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.726489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.726521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.726703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.726737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.726929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.726961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.727236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.727268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.727451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.727483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.727757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.727791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.728071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.728103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 
00:31:12.699 [2024-10-14 17:48:11.728389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.728421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.728704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.728738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.729021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.699 [2024-10-14 17:48:11.729053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.699 qpair failed and we were unable to recover it. 00:31:12.699 [2024-10-14 17:48:11.729329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.729361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.729644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.729678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.729961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.729993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.730214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.730247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.730525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.730558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.730693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.730727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.730910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.730942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 
00:31:12.700 [2024-10-14 17:48:11.731219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.731251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.731452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.731483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.731757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.731790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.732040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.732072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.732351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.732384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.732665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.732698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.732912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.732944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.733218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.733250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.733383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.733415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.733632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.733665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 
00:31:12.700 [2024-10-14 17:48:11.733920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.733953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.734148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.734185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.734411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.734443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.734745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.734779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.735037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.735069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.735321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.735353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.735612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.735645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.735917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.735949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.736227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.736258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.736577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.736620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 
00:31:12.700 [2024-10-14 17:48:11.736767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.736799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.737070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.737103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.737405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.737438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.737630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.737664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.737892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.737924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.738121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.738154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.738425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.738457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.738657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.738691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.738890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.738923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.739110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.739141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 
00:31:12.700 [2024-10-14 17:48:11.739325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.739357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.739638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.739672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.739852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.739884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.700 [2024-10-14 17:48:11.740161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.700 [2024-10-14 17:48:11.740194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.700 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.740329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.740360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.740542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.740573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.740835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.740868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.741121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.741153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.741349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.741382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.741631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.741664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 
00:31:12.701 [2024-10-14 17:48:11.741945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.741978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.742124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.742155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.742450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.742483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.742751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.742785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.743006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.743039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.743232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.743264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.743468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.743500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.743752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.743785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.744051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.744083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.744330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.744362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 
00:31:12.701 [2024-10-14 17:48:11.744543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.744574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.744806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.744845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.745041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.745074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.745373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.745405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.745673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.745708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.745991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.746023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.746307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.746339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.746623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.746657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.746941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.746972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.747221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.747253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 
00:31:12.701 [2024-10-14 17:48:11.747461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.747493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.747689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.747722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.747975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.748007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.748201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.748232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.748452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.748484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.748743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.748776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.749081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.749113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.749323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.749354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.749622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.749655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.749920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.749952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 
00:31:12.701 [2024-10-14 17:48:11.750232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.750264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.750555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.750587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.750808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.750840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.751083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.701 [2024-10-14 17:48:11.751115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.701 qpair failed and we were unable to recover it. 00:31:12.701 [2024-10-14 17:48:11.751310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.702 [2024-10-14 17:48:11.751342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.702 qpair failed and we were unable to recover it. 00:31:12.702 [2024-10-14 17:48:11.751558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.702 [2024-10-14 17:48:11.751591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.702 qpair failed and we were unable to recover it. 00:31:12.702 [2024-10-14 17:48:11.751804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.702 [2024-10-14 17:48:11.751837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.702 qpair failed and we were unable to recover it. 00:31:12.702 [2024-10-14 17:48:11.752039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.702 [2024-10-14 17:48:11.752071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.702 qpair failed and we were unable to recover it. 00:31:12.702 [2024-10-14 17:48:11.752281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.702 [2024-10-14 17:48:11.752312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.702 qpair failed and we were unable to recover it. 00:31:12.702 [2024-10-14 17:48:11.752569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.702 [2024-10-14 17:48:11.752612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.702 qpair failed and we were unable to recover it. 
00:31:12.702 [2024-10-14 17:48:11.752895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.752927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.753154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.753186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.753462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.753494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.753691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.753726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.753927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.753959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.754176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.754207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.754397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.754429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.754734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.754767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.755034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.755066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.755349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.755381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.755643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.755678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.755909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.755946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.756205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.756237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.756426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.756458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.756752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.756786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.757087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.757119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.757378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.757412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.757738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.757773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.757964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.757997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.758273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.758306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.758507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.758540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.758827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.758860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.759060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.759093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.759285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.759318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.759565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.759597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.759934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.759968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.760114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.760147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.760427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.760460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.760659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.760694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.760996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.761028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.761218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.761250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.761486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.761518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.761733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.761768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.761970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.702 [2024-10-14 17:48:11.762002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.702 qpair failed and we were unable to recover it.
00:31:12.702 [2024-10-14 17:48:11.762255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.762287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.762485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.762517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.762730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.762763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.763017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.763049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.763247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.763281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.763508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.763542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.763736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.763771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.764027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.764060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.764241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.764273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.764470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.764503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.764692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.764726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.764951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.764983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.765168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.765200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.765468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.765501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.765698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.765732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.765994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.766027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.766229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.766261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.766414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.766452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.766662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.766696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.767001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.767033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.767286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.767320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.767512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.767545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.767826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.767860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.768066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.768099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.768366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.768399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.768629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.768664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.768916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.768950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.769247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.769279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.769484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.769518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.769771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.769806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.770014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.770047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.770261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.770294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.770496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.770528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.770709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.770744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.771022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.771054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.771337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.703 [2024-10-14 17:48:11.771370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.703 qpair failed and we were unable to recover it.
00:31:12.703 [2024-10-14 17:48:11.771628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.771662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.771941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.771974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.772286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.772319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.772571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.772612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.772918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.772951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.773223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.773256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.773506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.773539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.773791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.773826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.774054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.774088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.774340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.774373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.774508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.774541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.774736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.774772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.774973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.775005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.775259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.775292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.775590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.775650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.775939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.775972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.776174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.776207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.776509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.776541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.776784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.776818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.777133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.777167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.777424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.777457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.777652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.777693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.777972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.778005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.778259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.778292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.778573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.778615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.778894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.778926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.779177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.779210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.779484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.779518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.779721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.779755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.779935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.779968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.780200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.780233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.780436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.780468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.780660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.780694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.780965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.780998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.781189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.781221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.781417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.781450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.781705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.781740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.781922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.781955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.782215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.782248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.782378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.704 [2024-10-14 17:48:11.782411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.704 qpair failed and we were unable to recover it.
00:31:12.704 [2024-10-14 17:48:11.782671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.782705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.782959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.782992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.783267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.783300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.783498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.783530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.783729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.783762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.783963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.783996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.784251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.784284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.784412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.784444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.784724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.784803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.784974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.785011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.785202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.785236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.785514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.785548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.785756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.785790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.785978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.786011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.786142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.786174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.786379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.786411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.786547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.786580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.786848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.786882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.787163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.787195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.787445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.787477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.787676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.787711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.787939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.787970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.788178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.788210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.788340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.788372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.788563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.788594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.788859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.788892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.789112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.789145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.789399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.789430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.789715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.789748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.790011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.790045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.790181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.790214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.790512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.790545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.790693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.790726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.791006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.791039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.791316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.791348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.791567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.791612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.791761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.791794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.792072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.792104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.792236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.792267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.792570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.705 [2024-10-14 17:48:11.792615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.705 qpair failed and we were unable to recover it.
00:31:12.705 [2024-10-14 17:48:11.792869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.792902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.793012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.793044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.793317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.793349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.793500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.793533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.793818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.793851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.793994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.794027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.794233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.794266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.794404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.794436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.794626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.794659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.794889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.794921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.795123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.795156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.795286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.795319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.795465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.795497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.795767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.795800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.795998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.796047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.796317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.796355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.796564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.796597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.796871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.796907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.797178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.797215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.797505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.797536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.797664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.797703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.797842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.797874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.798132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.798171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.798368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.798416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.798546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.798579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.798806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.798839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.799062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.799094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.799385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.799422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.799548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.799580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.799739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.799772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.800054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.800085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.800306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.800338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.800627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.800677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.800889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.800922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.706 [2024-10-14 17:48:11.801184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.706 [2024-10-14 17:48:11.801216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.706 qpair failed and we were unable to recover it.
00:31:12.974 [2024-10-14 17:48:11.801356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.974 [2024-10-14 17:48:11.801388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.974 qpair failed and we were unable to recover it.
00:31:12.974 [2024-10-14 17:48:11.801616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.974 [2024-10-14 17:48:11.801651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.974 qpair failed and we were unable to recover it.
00:31:12.974 [2024-10-14 17:48:11.801928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.974 [2024-10-14 17:48:11.801960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.974 qpair failed and we were unable to recover it.
00:31:12.974 [2024-10-14 17:48:11.802173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.974 [2024-10-14 17:48:11.802205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.974 qpair failed and we were unable to recover it.
00:31:12.974 [2024-10-14 17:48:11.802422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.974 [2024-10-14 17:48:11.802470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.974 qpair failed and we were unable to recover it.
00:31:12.974 [2024-10-14 17:48:11.802690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.974 [2024-10-14 17:48:11.802735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.974 qpair failed and we were unable to recover it.
00:31:12.974 [2024-10-14 17:48:11.802965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.974 [2024-10-14 17:48:11.803011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.974 qpair failed and we were unable to recover it.
00:31:12.974 [2024-10-14 17:48:11.803305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.974 [2024-10-14 17:48:11.803350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.974 qpair failed and we were unable to recover it.
00:31:12.974 [2024-10-14 17:48:11.803509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.974 [2024-10-14 17:48:11.803556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.974 qpair failed and we were unable to recover it.
00:31:12.975 [2024-10-14 17:48:11.803921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.975 [2024-10-14 17:48:11.803971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.975 qpair failed and we were unable to recover it.
00:31:12.975 [2024-10-14 17:48:11.804134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.975 [2024-10-14 17:48:11.804180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.975 qpair failed and we were unable to recover it.
00:31:12.975 [2024-10-14 17:48:11.804479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.975 [2024-10-14 17:48:11.804526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.975 qpair failed and we were unable to recover it.
00:31:12.975 [2024-10-14 17:48:11.804774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.975 [2024-10-14 17:48:11.804821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.975 qpair failed and we were unable to recover it.
00:31:12.975 [2024-10-14 17:48:11.804991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.975 [2024-10-14 17:48:11.805033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.975 qpair failed and we were unable to recover it.
00:31:12.975 [2024-10-14 17:48:11.805303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.975 [2024-10-14 17:48:11.805367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.975 qpair failed and we were unable to recover it.
00:31:12.975 [2024-10-14 17:48:11.805672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.975 [2024-10-14 17:48:11.805719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.975 qpair failed and we were unable to recover it.
00:31:12.975 [2024-10-14 17:48:11.805974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.975 [2024-10-14 17:48:11.806010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.975 qpair failed and we were unable to recover it.
00:31:12.975 [2024-10-14 17:48:11.806227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.975 [2024-10-14 17:48:11.806259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.975 qpair failed and we were unable to recover it.
00:31:12.975 [2024-10-14 17:48:11.806537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.975 [2024-10-14 17:48:11.806570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.975 qpair failed and we were unable to recover it.
00:31:12.975 [2024-10-14 17:48:11.806733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.975 [2024-10-14 17:48:11.806766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.975 qpair failed and we were unable to recover it.
00:31:12.975 [2024-10-14 17:48:11.806967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.975 [2024-10-14 17:48:11.806999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.975 qpair failed and we were unable to recover it.
00:31:12.975 [2024-10-14 17:48:11.807192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.975 [2024-10-14 17:48:11.807225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.975 qpair failed and we were unable to recover it.
00:31:12.975 [2024-10-14 17:48:11.807410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.975 [2024-10-14 17:48:11.807442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.975 qpair failed and we were unable to recover it.
00:31:12.975 [2024-10-14 17:48:11.807646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.975 [2024-10-14 17:48:11.807680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.975 qpair failed and we were unable to recover it.
00:31:12.975 [2024-10-14 17:48:11.807862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.975 [2024-10-14 17:48:11.807893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.975 qpair failed and we were unable to recover it.
00:31:12.975 [2024-10-14 17:48:11.808085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.808116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.808299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.808331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.808578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.808618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.808766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.808799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.808998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.809030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.809223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.809255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.809450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.809482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.809634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.809669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.809885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.809918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.810040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.810072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 
00:31:12.975 [2024-10-14 17:48:11.810205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.810237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.810371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.810402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.810582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.810622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.810882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.810912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.811101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.811131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.811322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.811352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.811548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.811578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.811766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.811796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.812020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.812049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.812206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.812235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 
00:31:12.975 [2024-10-14 17:48:11.812413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.812442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.812728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.812759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.813019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.813048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.813246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.813274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.975 qpair failed and we were unable to recover it. 00:31:12.975 [2024-10-14 17:48:11.813464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.975 [2024-10-14 17:48:11.813492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.813690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.813720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.813839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.813868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.814132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.814162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.814271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.814300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.814490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.814521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 
00:31:12.976 [2024-10-14 17:48:11.814722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.814754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.814998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.815027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.815219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.815248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.815419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.815450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.815646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.815677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.815948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.815976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.816247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.816277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.816393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.816422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.816656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.816688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.816814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.816843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 
00:31:12.976 [2024-10-14 17:48:11.816960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.816989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.817110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.817139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.817313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.817343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.817529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.817558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.817788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.817820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.818035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.818066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.818305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.818334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.818527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.818557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.818834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.818865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.819060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.819089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 
00:31:12.976 [2024-10-14 17:48:11.819234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.819263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.819514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.819545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.819784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.819814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.819993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.820022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.820353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.820382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.820656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.820687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.820861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.820891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.821165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.821200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.821408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.821438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.821729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.821760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 
00:31:12.976 [2024-10-14 17:48:11.821948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.821978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.822270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.822301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.822553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.822582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.822871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.976 [2024-10-14 17:48:11.822902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.976 qpair failed and we were unable to recover it. 00:31:12.976 [2024-10-14 17:48:11.823082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.823113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.823300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.823329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.823504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.823534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.823762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.823795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.824041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.824070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.824383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.824413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 
00:31:12.977 [2024-10-14 17:48:11.824624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.824655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.824939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.824970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.825213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.825243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.825437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.825467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.825673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.825703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.825826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.825855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.826117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.826148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.826274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.826304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.826548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.826577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.826809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.826839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 
00:31:12.977 [2024-10-14 17:48:11.827095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.827125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.827417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.827446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.827723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.827753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.827969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.827999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.828250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.828285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.828462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.828492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.828737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.828769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.829024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.829055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.829318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.829348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.829592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.829628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 
00:31:12.977 [2024-10-14 17:48:11.829766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.829795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.830067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.830096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.830338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.830367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.830639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.830675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.830824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.830855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.831040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.831070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.831369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.831400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.831698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.831729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.832014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.832045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.832318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.832348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 
00:31:12.977 [2024-10-14 17:48:11.832652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.832684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.832947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.832976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.833243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.833273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.833518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.833548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.833834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.977 [2024-10-14 17:48:11.833865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.977 qpair failed and we were unable to recover it. 00:31:12.977 [2024-10-14 17:48:11.834140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.834169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.834460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.834489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.834773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.834804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.835087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.835115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.835364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.835399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 
00:31:12.978 [2024-10-14 17:48:11.835588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.835634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.835816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.835846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.836126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.836156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.836428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.836458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.836679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.836711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.836912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.836942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.837141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.837171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.837367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.837396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.837578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.837614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.837735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.837765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 
00:31:12.978 [2024-10-14 17:48:11.837960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.837989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.838280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.838310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.838629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.838661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.838912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.838942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.839211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.839241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.839501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.839534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.839717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.839749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.840028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.840058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.840325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.840354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 00:31:12.978 [2024-10-14 17:48:11.840657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.840689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it. 
00:31:12.978 [2024-10-14 17:48:11.840911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.978 [2024-10-14 17:48:11.840941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.978 qpair failed and we were unable to recover it.
[... the same triplet, posix_sock_create connect() failed, errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.", repeats through 2024-10-14 17:48:12.022183 ...]
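On Linux, errno 111 is ECONNREFUSED: the TCP SYN to 10.0.0.2:4420 (4420 is the IANA default NVMe/TCP port) was answered with an RST, which normally means nothing was accepting on that address while the initiator kept retrying. As a minimal standalone sketch, using plain POSIX sockets rather than SPDK's posix_sock_create, the following program reproduces the same errno when no listener is present; the literal IP and port simply mirror the log and are for illustration only.

/* Minimal sketch (not SPDK code): reproduce the errno reported by
 * posix_sock_create when the target port has no listener. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),          /* default NVMe/TCP port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With nothing listening on the port, on Linux this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}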
00:31:12.980 [2024-10-14 17:48:12.022453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.980 [2024-10-14 17:48:12.022524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:12.980 qpair failed and we were unable to recover it.
00:31:12.981 [2024-10-14 17:48:12.022835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.981 [2024-10-14 17:48:12.022905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:12.981 qpair failed and we were unable to recover it.
[... the same error triplet for tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 repeats through 2024-10-14 17:48:12.050890 ...]
00:31:12.984 [2024-10-14 17:48:12.051120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.984 [2024-10-14 17:48:12.051191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:12.984 qpair failed and we were unable to recover it.
00:31:12.988 [... the same connect()/qpair-failure sequence repeated 169 more times for tqpair=0x2491c60 between 17:48:12.051 and 17:48:12.088 ...]
00:31:12.988 [2024-10-14 17:48:12.088850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.988 [2024-10-14 17:48:12.088882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.988 qpair failed and we were unable to recover it. 00:31:12.988 [2024-10-14 17:48:12.089010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.988 [2024-10-14 17:48:12.089041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.988 qpair failed and we were unable to recover it. 00:31:12.988 [2024-10-14 17:48:12.089301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.988 [2024-10-14 17:48:12.089331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.988 qpair failed and we were unable to recover it. 00:31:12.988 [2024-10-14 17:48:12.089526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.988 [2024-10-14 17:48:12.089556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.988 qpair failed and we were unable to recover it. 00:31:12.988 [2024-10-14 17:48:12.089732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.089765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.089947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.089979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.090160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.090203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.090392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.090424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.090695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.090730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.090919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.090951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 
00:31:12.989 [2024-10-14 17:48:12.091211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.091243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.091528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.091561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.091689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.091722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.091959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.091991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.092177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.092208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.092388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.092425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.092637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.092671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.092886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.092917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.093136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.093167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.093414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.093445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 
00:31:12.989 [2024-10-14 17:48:12.093662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.093695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.093885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.093916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.094154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.094185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.094301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.094334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.094451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.094482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.094664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.094696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.094892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.094923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.095093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.095124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.095366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.095398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.095591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.095630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 
00:31:12.989 [2024-10-14 17:48:12.095821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.095851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.096057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.096088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.096346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.096377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.096504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.096534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.096799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.096832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.097032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.097062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.097190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.097222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.097480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.097512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.097699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.097734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.097910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.097941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 
00:31:12.989 [2024-10-14 17:48:12.098078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.098109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.098278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.098309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.098489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.098522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.098651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.989 [2024-10-14 17:48:12.098685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.989 qpair failed and we were unable to recover it. 00:31:12.989 [2024-10-14 17:48:12.098801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.098832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.098938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.098969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.099209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.099242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.099419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.099452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.099569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.099609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.099716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.099746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 
00:31:12.990 [2024-10-14 17:48:12.099931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.099961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.100170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.100202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.100307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.100338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.100519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.100549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.100660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.100690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.100933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.100964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.101138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.101169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.101358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.101402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.101668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.101705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.101856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.101888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 
00:31:12.990 [2024-10-14 17:48:12.102008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.102039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.102224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.102270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.102392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.102426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.102622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.102656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.102906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.102937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.103149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.103181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.103299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.103331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.103511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.103558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.103810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.103849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.104097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.104129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 
00:31:12.990 [2024-10-14 17:48:12.104391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.104428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.104631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.104665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.104918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.104950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.105189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.105221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.105511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.105567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.105858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.105896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:12.990 [2024-10-14 17:48:12.106088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.990 [2024-10-14 17:48:12.106120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:12.990 qpair failed and we were unable to recover it. 00:31:13.270 [2024-10-14 17:48:12.106317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-10-14 17:48:12.106349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-10-14 17:48:12.106542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-10-14 17:48:12.106574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-10-14 17:48:12.106791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-10-14 17:48:12.106837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 
00:31:13.270 [2024-10-14 17:48:12.107037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-10-14 17:48:12.107084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-10-14 17:48:12.107245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-10-14 17:48:12.107291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.270 [2024-10-14 17:48:12.107530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.270 [2024-10-14 17:48:12.107572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.270 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.107870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.107918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.108079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.108121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.108320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.108366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.108632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.108679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.108873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.108910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.109204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.109237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.109452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.109484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 
00:31:13.271 [2024-10-14 17:48:12.109620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.109654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.109848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.109880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.110064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.110095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.110225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.110257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.110443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.110475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.110582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.110621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.110860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.110892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.111071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.111102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.111225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.111256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.111368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.111401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 
00:31:13.271 [2024-10-14 17:48:12.111608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.111640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.111823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.111861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.112049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.112080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.112292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.112325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.112493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.112524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.112723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.112758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.112939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.112971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.113082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.113113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.113353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.113384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.113576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.113628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 
00:31:13.271 [2024-10-14 17:48:12.113810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.113842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.113947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.113977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.114108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.114139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.114320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.114352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.114598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.114640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.114786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.114818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.115090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.115122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.115380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.115411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.115613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.115645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.115851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.115883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 
00:31:13.271 [2024-10-14 17:48:12.116068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.116098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.116361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.116393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.116499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.271 [2024-10-14 17:48:12.116530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.271 qpair failed and we were unable to recover it. 00:31:13.271 [2024-10-14 17:48:12.116716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.272 [2024-10-14 17:48:12.116750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.272 qpair failed and we were unable to recover it. 00:31:13.272 [2024-10-14 17:48:12.116933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.272 [2024-10-14 17:48:12.116966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.272 qpair failed and we were unable to recover it. 00:31:13.272 [2024-10-14 17:48:12.117088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.272 [2024-10-14 17:48:12.117119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.272 qpair failed and we were unable to recover it. 00:31:13.272 [2024-10-14 17:48:12.117355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.272 [2024-10-14 17:48:12.117386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.272 qpair failed and we were unable to recover it. 00:31:13.272 [2024-10-14 17:48:12.117577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.272 [2024-10-14 17:48:12.117617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.272 qpair failed and we were unable to recover it. 00:31:13.272 [2024-10-14 17:48:12.117720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.272 [2024-10-14 17:48:12.117756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.272 qpair failed and we were unable to recover it. 00:31:13.272 [2024-10-14 17:48:12.117961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.272 [2024-10-14 17:48:12.117992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.272 qpair failed and we were unable to recover it. 
00:31:13.272 [2024-10-14 17:48:12.118262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.272 [2024-10-14 17:48:12.118293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:13.272 qpair failed and we were unable to recover it.
00:31:13.277 [... the same three-line error sequence repeats continuously from 17:48:12.118478 through 17:48:12.164376: every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111, and the qpair (tqpair=0x2491c60) cannot be recovered ...]
00:31:13.277 [2024-10-14 17:48:12.164570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.164629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-10-14 17:48:12.164816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.164847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-10-14 17:48:12.165022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.165053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-10-14 17:48:12.165222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.165252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-10-14 17:48:12.165499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.165530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-10-14 17:48:12.165722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.165760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-10-14 17:48:12.165971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.166001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-10-14 17:48:12.166262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.166293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-10-14 17:48:12.166482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.166513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-10-14 17:48:12.166717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.166748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 
00:31:13.277 [2024-10-14 17:48:12.167029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.167059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-10-14 17:48:12.167324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.167356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-10-14 17:48:12.167565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.167597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-10-14 17:48:12.167732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.167762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-10-14 17:48:12.168051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.168083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-10-14 17:48:12.168319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.168350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-10-14 17:48:12.168525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.168556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-10-14 17:48:12.168760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.168792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-10-14 17:48:12.169004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.169036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-10-14 17:48:12.169266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.169298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 
00:31:13.277 [2024-10-14 17:48:12.169419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.277 [2024-10-14 17:48:12.169449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.277 qpair failed and we were unable to recover it. 00:31:13.277 [2024-10-14 17:48:12.169635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.169667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.169789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.169819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.169930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.169961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.170156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.170187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.170316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.170347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.170528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.170561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.170748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.170781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.170982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.171014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.171126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.171156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 
00:31:13.278 [2024-10-14 17:48:12.171347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.171379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.171550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.171581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.171720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.171752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.171970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.172002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.172183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.172213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.172386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.172418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.172618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.172650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.172816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.172848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.172970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.173001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.173177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.173208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 
00:31:13.278 [2024-10-14 17:48:12.173481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.173514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.173762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.173796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.174060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.174091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.174203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.174234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.174509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.174540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.174741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.174774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.174961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.174997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.175285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.175317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.175521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.175552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.175769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.175802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 
00:31:13.278 [2024-10-14 17:48:12.176010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.176040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.176164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.176194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.176434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.176465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.176654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.176687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.176948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.176979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.177169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.177201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.177396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.177428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.177542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.177572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.177770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.177801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.177933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.177963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 
00:31:13.278 [2024-10-14 17:48:12.178151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.178183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.178318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.278 [2024-10-14 17:48:12.178348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.278 qpair failed and we were unable to recover it. 00:31:13.278 [2024-10-14 17:48:12.178529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.178559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.178750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.178782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.178913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.178945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.179118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.179149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.179322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.179352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.179622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.179654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.179770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.179800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.179922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.179954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 
00:31:13.279 [2024-10-14 17:48:12.180197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.180228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.180417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.180449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.180655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.180688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.180881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.180921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.181184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.181216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.181408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.181437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.181701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.181734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.181998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.182029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.182215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.182246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.182349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.182379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 
00:31:13.279 [2024-10-14 17:48:12.182481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.182512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.182715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.182747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.182981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.183011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.183331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.183363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.183630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.183662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.183845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.183876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.184000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.184030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.184251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.184283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.184502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.184532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.184663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.184697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 
00:31:13.279 [2024-10-14 17:48:12.184889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.184921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.185182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.185214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.185350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.185381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.185558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.185589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.185834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.185866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.186126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.186158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.186346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.186379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.186518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.186548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.279 [2024-10-14 17:48:12.186798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.279 [2024-10-14 17:48:12.186831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.279 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.187024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.187056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 
00:31:13.280 [2024-10-14 17:48:12.187168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.187205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.187387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.187418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.187588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.187627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.187800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.187830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.188097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.188128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.188371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.188403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.188535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.188566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.188712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.188745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.188999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.189030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.189211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.189242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 
00:31:13.280 [2024-10-14 17:48:12.189479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.189511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.189694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.189727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.189933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.189963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.190240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.190271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.190394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.190425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.190714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.190748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.191011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.191042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.191243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.191275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.191512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.191544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.191736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.191769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 
00:31:13.280 [2024-10-14 17:48:12.191965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.191997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.192180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.192212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.192422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.192453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.192579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.192617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.192739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.192770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.192949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.192979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.193189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.193221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.193400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.193432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.193547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.193578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 00:31:13.280 [2024-10-14 17:48:12.193854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.280 [2024-10-14 17:48:12.193887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.280 qpair failed and we were unable to recover it. 
00:31:13.280 [2024-10-14 17:48:12.194020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.280 [2024-10-14 17:48:12.194051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:13.280 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats ~208 more times with advancing timestamps between the entries shown above and below ...]
00:31:13.285 [2024-10-14 17:48:12.239110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.286 [2024-10-14 17:48:12.239140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:13.286 qpair failed and we were unable to recover it.
00:31:13.286 [2024-10-14 17:48:12.239331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.239362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.239572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.239611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.239797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.239829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.240012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.240042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.240219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.240249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.240495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.240526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.240702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.240733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.240973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.241004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.241120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.241150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.241389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.241420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 
00:31:13.286 [2024-10-14 17:48:12.241657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.241691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.241940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.241971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.242177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.242209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.242423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.242455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.242715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.242748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.242939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.242970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.243100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.243130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.243334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.243365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.243553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.243584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.243695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.243727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 
00:31:13.286 [2024-10-14 17:48:12.243918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.243949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.244082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.244112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.244348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.244380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.244497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.244528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.244631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.244663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.244872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.244902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.245093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.245123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.245362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.245394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.245617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.245649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.245788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.245820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 
00:31:13.286 [2024-10-14 17:48:12.246019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.246049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.246220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.246250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.246549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.246580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.246773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.246805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.246926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.246956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.247157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.247189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.247322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.247354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.247595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.247639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.247899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.247930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 00:31:13.286 [2024-10-14 17:48:12.248102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.286 [2024-10-14 17:48:12.248134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.286 qpair failed and we were unable to recover it. 
00:31:13.286 [2024-10-14 17:48:12.248305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.248336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.248551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.248582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.248887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.248920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.249131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.249162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.249368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.249400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.249516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.249552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.249680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.249713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.249905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.249936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.250056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.250086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.250319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.250351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 
00:31:13.287 [2024-10-14 17:48:12.250567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.250598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.250891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.250922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.251091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.251121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.251327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.251358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.251529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.251560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.251684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.251715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.251841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.251872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.252060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.252091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.252258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.252288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.252555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.252587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 
00:31:13.287 [2024-10-14 17:48:12.252744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.252776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.252957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.252987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.253169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.253201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.253386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.253417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.253680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.253713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.253975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.254006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.254196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.254228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.254330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.254361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.254565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.254596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.254870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.254902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 
00:31:13.287 [2024-10-14 17:48:12.255021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.255051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.255233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.255264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.255445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.255482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.255618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.255650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.255892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.255923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.256160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.256191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.256372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.256402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.256578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.256633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.256824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.256854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.287 [2024-10-14 17:48:12.257059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.257091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 
00:31:13.287 [2024-10-14 17:48:12.257309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.287 [2024-10-14 17:48:12.257340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.287 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.257613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.257646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.257859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.257891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.258016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.258047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.258301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.258332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.258596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.258638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.258904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.258936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.259052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.259084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.259292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.259323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.259432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.259463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 
00:31:13.288 [2024-10-14 17:48:12.259578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.259619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.259803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.259835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.260042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.260073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.260312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.260344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.260519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.260549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.260674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.260705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.260974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.261006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.261192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.261224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.261405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.261436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.261552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.261588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 
00:31:13.288 [2024-10-14 17:48:12.261728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.261759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.261953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.261985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.262248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.262281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.262466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.262497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.262619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.262653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.262753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.262784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.262892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.262925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.263029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.263059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.263239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.263272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.263460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.263491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 
00:31:13.288 [2024-10-14 17:48:12.263706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.263740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.263928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.263960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.264149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.264180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.264386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.264417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.264595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.264636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.264763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.288 [2024-10-14 17:48:12.264793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.288 qpair failed and we were unable to recover it. 00:31:13.288 [2024-10-14 17:48:12.265079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.265112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.265359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.265391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.265518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.265550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.265739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.265771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 
00:31:13.289 [2024-10-14 17:48:12.265974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.266005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.266138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.266168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.266367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.266400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.266571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.266611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.266733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.266764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.266935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.266966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.267084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.267115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.267298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.267331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.267513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.267545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.267810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.267844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 
00:31:13.289 [2024-10-14 17:48:12.268101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.268133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.268254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.268285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.268575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.268616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.268812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.268843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.269023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.269054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.269308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.269339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.269580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.269623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.269885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.269917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.270030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.270060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.270180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.270210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 
00:31:13.289 [2024-10-14 17:48:12.270384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.270426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.270668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.270702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.270884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.270916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.271027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.271057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.271235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.271267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.271473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.271505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.271677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.271710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.271948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.271979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.272260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.272293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.272462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.272494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 
00:31:13.289 [2024-10-14 17:48:12.272761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.272793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.273053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.273084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.273259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.273291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.273484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.273516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.273769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.273803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.274019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.274051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.274237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.289 [2024-10-14 17:48:12.274268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.289 qpair failed and we were unable to recover it. 00:31:13.289 [2024-10-14 17:48:12.274395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.274426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.274667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.274701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.274833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.274865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 
00:31:13.290 [2024-10-14 17:48:12.274985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.275016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.275196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.275226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.275340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.275373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.275493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.275524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.275730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.275762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.275935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.275966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.276228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.276260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.276496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.276533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.276666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.276698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.276863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.276895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 
00:31:13.290 [2024-10-14 17:48:12.277004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.277036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.277220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.277252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.277439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.277471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.277790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.277823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.278091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.278123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.278246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.278277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.278468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.278500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.278689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.278722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.278977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.279009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.279181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.279213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 
00:31:13.290 [2024-10-14 17:48:12.279348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.279379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.279564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.279595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.279812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.279844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.280080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.280112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.280294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.280325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.280523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.280555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.280685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.280718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.280823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.280853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.281060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.281092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.281356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.281388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 
00:31:13.290 [2024-10-14 17:48:12.281521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.281553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.281822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.281855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.281981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.282012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.282129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.282159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.282343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.282380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.282642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.282676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.282867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.282899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.290 [2024-10-14 17:48:12.283014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.290 [2024-10-14 17:48:12.283045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.290 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.283217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.283248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.283536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.283568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 
00:31:13.291 [2024-10-14 17:48:12.283764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.283796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.283982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.284012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.284227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.284258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.284451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.284483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.284594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.284633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.284835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.284867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.285049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.285080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.285330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.285361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.285624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.285658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.285917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.285948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 
00:31:13.291 [2024-10-14 17:48:12.286213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.286245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.286492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.286523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.286652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.286686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.286969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.287001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.287176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.287208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.287324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.287355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.287537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.287568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.287816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.287848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.288021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.288053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.288223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.288255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 
00:31:13.291 [2024-10-14 17:48:12.288361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.288391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.288563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.288595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.288854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.288887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.289088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.289120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.289221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.289253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.289518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.289550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.289751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.289783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.290050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.290081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.290214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.290246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.290488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.290520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 
00:31:13.291 [2024-10-14 17:48:12.290646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.290683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.290862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.290894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.290995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.291026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.291312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.291344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.291594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.291634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.291850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.291882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.292125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.292157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.292363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.292395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.291 [2024-10-14 17:48:12.292583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.291 [2024-10-14 17:48:12.292623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.291 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.292860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.292892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 
00:31:13.292 [2024-10-14 17:48:12.293100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.293131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.293373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.293405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.293639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.293672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.293799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.293830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.293943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.293973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.294239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.294271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.294453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.294485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.294696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.294730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.294912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.294943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.295064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.295095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 
00:31:13.292 [2024-10-14 17:48:12.295196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.295228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.295406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.295439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.295619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.295651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.295845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.295877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.296004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.296034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.296223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.296255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.296453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.296484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.296677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.296709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.296822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.296852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.296970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.297002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 
00:31:13.292 [2024-10-14 17:48:12.297174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.297205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.297379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.297409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.297581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.297625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.297891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.297923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.298129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.298161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.298348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.298380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.298566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.298597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.298867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.298899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.299080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.299111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.299237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.299267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 
00:31:13.292 [2024-10-14 17:48:12.299453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.299484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.299742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.299774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.299955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.299985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.300176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.300208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.300333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.292 [2024-10-14 17:48:12.300365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.292 qpair failed and we were unable to recover it. 00:31:13.292 [2024-10-14 17:48:12.300487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.300518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.300706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.300738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.300921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.300953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.301070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.301102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.301287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.301318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 
00:31:13.293 [2024-10-14 17:48:12.301524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.301554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.301809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.301840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.302012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.302043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.302217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.302248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.302424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.302454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.302641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.302673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.302776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.302808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.302921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.302951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.303076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.303107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.303366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.303403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 
00:31:13.293 [2024-10-14 17:48:12.303588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.303631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.303821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.303853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.304020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.304051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.304263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.304294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.304521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.304552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.304748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.304780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.304978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.305008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.305260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.305292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.305533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.305565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.305842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.305875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 
00:31:13.293 [2024-10-14 17:48:12.306055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.306086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.306346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.306377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.306580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.306621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.306744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.306775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.307020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.307051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.307313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.307345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.307515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.307546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.307726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.307758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.308038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.308070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 00:31:13.293 [2024-10-14 17:48:12.308274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.293 [2024-10-14 17:48:12.308305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.293 qpair failed and we were unable to recover it. 
00:31:13.298 [2024-10-14 17:48:12.351101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.298 [2024-10-14 17:48:12.351134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.298 qpair failed and we were unable to recover it. 00:31:13.298 [2024-10-14 17:48:12.351251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.298 [2024-10-14 17:48:12.351282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.298 qpair failed and we were unable to recover it. 00:31:13.298 [2024-10-14 17:48:12.351540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.298 [2024-10-14 17:48:12.351571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.298 qpair failed and we were unable to recover it. 00:31:13.298 [2024-10-14 17:48:12.351802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.298 [2024-10-14 17:48:12.351872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.298 qpair failed and we were unable to recover it. 00:31:13.298 [2024-10-14 17:48:12.352018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.298 [2024-10-14 17:48:12.352053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.298 qpair failed and we were unable to recover it. 00:31:13.298 [2024-10-14 17:48:12.352190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.298 [2024-10-14 17:48:12.352223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.298 qpair failed and we were unable to recover it. 00:31:13.298 [2024-10-14 17:48:12.352417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.298 [2024-10-14 17:48:12.352459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.298 qpair failed and we were unable to recover it. 00:31:13.298 [2024-10-14 17:48:12.352697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.298 [2024-10-14 17:48:12.352730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.298 qpair failed and we were unable to recover it. 00:31:13.298 [2024-10-14 17:48:12.352920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.298 [2024-10-14 17:48:12.352951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.298 qpair failed and we were unable to recover it. 00:31:13.298 [2024-10-14 17:48:12.353129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.298 [2024-10-14 17:48:12.353160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.298 qpair failed and we were unable to recover it. 
00:31:13.298 [2024-10-14 17:48:12.353421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.298 [2024-10-14 17:48:12.353452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.298 qpair failed and we were unable to recover it. 00:31:13.298 [2024-10-14 17:48:12.353736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.298 [2024-10-14 17:48:12.353768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.298 qpair failed and we were unable to recover it. 00:31:13.298 [2024-10-14 17:48:12.353945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.298 [2024-10-14 17:48:12.353975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.298 qpair failed and we were unable to recover it. 00:31:13.298 [2024-10-14 17:48:12.354092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.299 [2024-10-14 17:48:12.354124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.299 qpair failed and we were unable to recover it. 00:31:13.299 [2024-10-14 17:48:12.354312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.299 [2024-10-14 17:48:12.354345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.299 qpair failed and we were unable to recover it. 00:31:13.299 [2024-10-14 17:48:12.354537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.299 [2024-10-14 17:48:12.354567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.299 qpair failed and we were unable to recover it. 00:31:13.299 [2024-10-14 17:48:12.354828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.299 [2024-10-14 17:48:12.354861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.299 qpair failed and we were unable to recover it. 00:31:13.299 [2024-10-14 17:48:12.355125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.299 [2024-10-14 17:48:12.355157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.299 qpair failed and we were unable to recover it. 00:31:13.299 [2024-10-14 17:48:12.355430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.299 [2024-10-14 17:48:12.355462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.299 qpair failed and we were unable to recover it. 00:31:13.299 [2024-10-14 17:48:12.355580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.299 [2024-10-14 17:48:12.355622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.299 qpair failed and we were unable to recover it. 
00:31:13.299 [2024-10-14 17:48:12.355820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.299 [2024-10-14 17:48:12.355853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.299 qpair failed and we were unable to recover it. 00:31:13.299 [2024-10-14 17:48:12.356115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.299 [2024-10-14 17:48:12.356146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.299 qpair failed and we were unable to recover it. 00:31:13.299 [2024-10-14 17:48:12.356349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.299 [2024-10-14 17:48:12.356381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.299 qpair failed and we were unable to recover it. 00:31:13.299 [2024-10-14 17:48:12.356644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.299 [2024-10-14 17:48:12.356678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.299 qpair failed and we were unable to recover it. 00:31:13.299 [2024-10-14 17:48:12.356793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.299 [2024-10-14 17:48:12.356824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.299 qpair failed and we were unable to recover it. 00:31:13.299 [2024-10-14 17:48:12.356952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.299 [2024-10-14 17:48:12.356984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.299 qpair failed and we were unable to recover it. 00:31:13.299 [2024-10-14 17:48:12.357249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.299 [2024-10-14 17:48:12.357298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.299 qpair failed and we were unable to recover it. 00:31:13.299 [2024-10-14 17:48:12.357529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.299 [2024-10-14 17:48:12.357560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.299 qpair failed and we were unable to recover it. 00:31:13.299 [2024-10-14 17:48:12.357805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.299 [2024-10-14 17:48:12.357837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.299 qpair failed and we were unable to recover it. 00:31:13.299 [2024-10-14 17:48:12.358027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.299 [2024-10-14 17:48:12.358058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.299 qpair failed and we were unable to recover it. 
[... the triple continues for tqpair=0x7f1a18000b90 from 17:48:12.358263 through 17:48:12.358760 ...]
00:31:13.299 [2024-10-14 17:48:12.358813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249fbb0 (9): Bad file descriptor
00:31:13.299 [2024-10-14 17:48:12.359047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.299 [2024-10-14 17:48:12.359115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:13.299 qpair failed and we were unable to recover it.
[... the triple then repeats for tqpair=0x2491c60 from 17:48:12.359387 through 17:48:12.361881, and for tqpair=0x7f1a14000b90 from 17:48:12.362157 through 17:48:12.362633 ...]
00:31:13.299 [2024-10-14 17:48:12.362752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.299 [2024-10-14 17:48:12.362783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:13.299 qpair failed and we were unable to recover it.
[... the same triple repeats for tqpair=0x7f1a14000b90 from 17:48:12.362993 through 17:48:12.369041 ...]
[... the same triple repeats for tqpair=0x7f1a14000b90 from 17:48:12.369297 through 17:48:12.370559 ...]
00:31:13.300 [2024-10-14 17:48:12.370819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.300 [2024-10-14 17:48:12.370888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.300 qpair failed and we were unable to recover it.
[... the same triple repeats for tqpair=0x7f1a18000b90 from 17:48:12.371078 through 17:48:12.376002 ...]
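Note: errno = 111 in the storm above is ECONNREFUSED on Linux, i.e. nothing is accepting TCP connections on 10.0.0.2:4420 while the target application is down; the initiator's nvme_tcp_qpair_connect_sock() keeps retrying and failing until a listener returns. A minimal host-side sketch of the same observation (an illustrative probe, not part of the test suite; it assumes a netcat that supports -z zero-I/O scans, and takes the address and port from the log):

    addr=10.0.0.2
    port=4420
    # probe until something accepts on the target's NVMe/TCP port again
    until nc -z -w 1 "$addr" "$port"; do
        echo "connect() refused (errno 111, ECONNREFUSED); retrying"
        sleep 0.1
    done
    echo "listener is back on $addr:$port"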
[... the qpair-failure triple for tqpair=0x7f1a18000b90 keeps repeating from 17:48:12.376129 through 17:48:12.379288, interleaved with the test-harness output below ...]
00:31:13.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1267157 Killed "${NVMF_APP[@]}" "$@"
00:31:13.301 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:31:13.301 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:31:13.301 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:31:13.301 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:31:13.301 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
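The xtrace lines above show the tc2 flow that produced the failure storm: target_disconnect.sh kills the running target (the "Killed" line), then disconnect_init calls nvmfappstart -m 0xF0 to bring up a fresh one. A rough sketch of that restart, assuming the SPDK test helpers behave as their trace suggests (waitforlisten is the helper traced further down; the netns name and binary path are copied from the log):

    # relaunch the NVMe-oF target inside the test network namespace
    sudo ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # block until the new process answers on its RPC socket
    waitforlisten "$nvmfpid"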
[... the same triple repeats for tqpair=0x7f1a18000b90 from 17:48:12.379553 through 17:48:12.384656 ...]
00:31:13.302 [2024-10-14 17:48:12.384869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.302 [2024-10-14 17:48:12.384912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.302 qpair failed and we were unable to recover it.
[... the same triple repeats for tqpair=0x7f1a20000b90 from 17:48:12.385046 through 17:48:12.385679 ...]
00:31:13.302 [17:48:12.385789 - 17:48:12.388636] connect() failed, errno = 111 / sock connection error pairs for tqpair=0x7f1a20000b90 (addr=10.0.0.2, port=4420) continue to repeat, interleaved with the target restart trace below
00:31:13.302 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1268258
00:31:13.302 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1268258
00:31:13.302 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:31:13.302 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1268258 ']'
00:31:13.302 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:13.302 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:13.302 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:13.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:13.303 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:13.303 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
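In the trace above, the test relaunches nvmf_tgt (pid 1268258) inside the cvl_0_0_ns_spdk network namespace and then calls waitforlisten, which blocks until the new process answers on the RPC UNIX domain socket /var/tmp/spdk.sock, giving up after max_retries=100 attempts. The harness function itself is shell; the sketch below is only a conceptual C rendering of that retry loop. The socket path and the retry bound of 100 come from the trace; the poll interval and everything else are assumptions for illustration:

/*
 * Conceptual sketch of waitforlisten: retry connecting to the RPC
 * UNIX domain socket until the target process starts listening,
 * bounded by max_retries. Not the harness's actual implementation.
 */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;             /* target is up and listening */
        }
        close(fd);
        usleep(500 * 1000);       /* assumed poll interval: 0.5 s */
    }
    return -1;                    /* gave up after max_retries attempts */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0) {
        fprintf(stderr, "process never started listening\n");
        return 1;
    }
    puts("listening");
    return 0;
}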
00:31:13.303 [2024-10-14 17:48:12.388838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.303 [2024-10-14 17:48:12.388868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.303 qpair failed and we were unable to recover it.
00:31:13.303 [2024-10-14 17:48:12.389296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.303 [2024-10-14 17:48:12.389349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:13.303 qpair failed and we were unable to recover it.
00:31:13.303 [2024-10-14 17:48:12.389561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.303 [2024-10-14 17:48:12.389598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:13.303 qpair failed and we were unable to recover it.
00:31:13.590 [17:48:12.389879 - 17:48:12.396986] identical failure pairs repeat for tqpair=0x7f1a14000b90 (addr=10.0.0.2, port=4420)
00:31:13.591 [2024-10-14 17:48:12.397125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.591 [2024-10-14 17:48:12.397181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:13.591 qpair failed and we were unable to recover it.
00:31:13.591 [2024-10-14 17:48:12.397299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.591 [2024-10-14 17:48:12.397336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.591 qpair failed and we were unable to recover it.
00:31:13.591 [17:48:12.397610 - 17:48:12.422592] identical connect() failed, errno = 111 / sock connection error pairs repeat for tqpair=0x7f1a18000b90 (addr=10.0.0.2, port=4420); each ends "qpair failed and we were unable to recover it."
00:31:13.594 [2024-10-14 17:48:12.422714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.594 [2024-10-14 17:48:12.422746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.594 qpair failed and we were unable to recover it.
00:31:13.594 [2024-10-14 17:48:12.422867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.594 [2024-10-14 17:48:12.422899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.594 qpair failed and we were unable to recover it. 00:31:13.594 [2024-10-14 17:48:12.423022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.594 [2024-10-14 17:48:12.423052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.594 qpair failed and we were unable to recover it. 00:31:13.594 [2024-10-14 17:48:12.423222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.594 [2024-10-14 17:48:12.423254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.594 qpair failed and we were unable to recover it. 00:31:13.594 [2024-10-14 17:48:12.423364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.594 [2024-10-14 17:48:12.423395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.594 qpair failed and we were unable to recover it. 00:31:13.594 [2024-10-14 17:48:12.423580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.594 [2024-10-14 17:48:12.423622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.594 qpair failed and we were unable to recover it. 00:31:13.594 [2024-10-14 17:48:12.423830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.594 [2024-10-14 17:48:12.423861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.594 qpair failed and we were unable to recover it. 00:31:13.594 [2024-10-14 17:48:12.424043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.594 [2024-10-14 17:48:12.424074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.594 qpair failed and we were unable to recover it. 00:31:13.594 [2024-10-14 17:48:12.424259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.594 [2024-10-14 17:48:12.424290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.594 qpair failed and we were unable to recover it. 00:31:13.594 [2024-10-14 17:48:12.424552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.594 [2024-10-14 17:48:12.424583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.594 qpair failed and we were unable to recover it. 00:31:13.594 [2024-10-14 17:48:12.424767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.594 [2024-10-14 17:48:12.424798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.594 qpair failed and we were unable to recover it. 
00:31:13.594 [2024-10-14 17:48:12.424970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.594 [2024-10-14 17:48:12.425001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.594 qpair failed and we were unable to recover it. 00:31:13.594 [2024-10-14 17:48:12.425118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.594 [2024-10-14 17:48:12.425149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.594 qpair failed and we were unable to recover it. 00:31:13.594 [2024-10-14 17:48:12.425334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.594 [2024-10-14 17:48:12.425364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.594 qpair failed and we were unable to recover it. 00:31:13.594 [2024-10-14 17:48:12.425544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.425575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.425775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.425806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.425979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.426010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.426124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.426154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.426320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.426351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.426472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.426502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.426644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.426677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 
00:31:13.595 [2024-10-14 17:48:12.426976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.427007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.427189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.427219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.427339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.427369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.427495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.427526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.427700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.427732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.427868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.427899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.428013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.428044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.428216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.428248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.428377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.428409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.428599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.428641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 
00:31:13.595 [2024-10-14 17:48:12.428835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.428866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.428981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.429012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.429113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.429150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.429364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.429395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.429593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.429632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.429753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.429785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.430024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.430055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.430222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.430254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.430434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.430466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.430638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.430671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 
00:31:13.595 [2024-10-14 17:48:12.430795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.430826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.430954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.430986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.431105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.431140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.431248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.431277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.431446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.431476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.431734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.431768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.431947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.431977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.432100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.432131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.432337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.432368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.432555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.432586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 
00:31:13.595 [2024-10-14 17:48:12.432714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.432745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.432863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.432894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.433067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.433099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.595 [2024-10-14 17:48:12.433281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.595 [2024-10-14 17:48:12.433313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.595 qpair failed and we were unable to recover it. 00:31:13.596 [2024-10-14 17:48:12.433504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.596 [2024-10-14 17:48:12.433535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.596 qpair failed and we were unable to recover it. 00:31:13.596 [2024-10-14 17:48:12.433727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.596 [2024-10-14 17:48:12.433759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.596 qpair failed and we were unable to recover it. 00:31:13.596 [2024-10-14 17:48:12.433873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.596 [2024-10-14 17:48:12.433904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.596 qpair failed and we were unable to recover it. 00:31:13.596 [2024-10-14 17:48:12.434106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.596 [2024-10-14 17:48:12.434138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.596 qpair failed and we were unable to recover it. 00:31:13.596 [2024-10-14 17:48:12.434251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.596 [2024-10-14 17:48:12.434283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.596 qpair failed and we were unable to recover it. 00:31:13.596 [2024-10-14 17:48:12.434459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.596 [2024-10-14 17:48:12.434489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.596 qpair failed and we were unable to recover it. 
00:31:13.596 [2024-10-14 17:48:12.434621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.596 [2024-10-14 17:48:12.434654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.596 qpair failed and we were unable to recover it. 00:31:13.596 [2024-10-14 17:48:12.434843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.596 [2024-10-14 17:48:12.434879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.596 qpair failed and we were unable to recover it. 00:31:13.596 [2024-10-14 17:48:12.435075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.596 [2024-10-14 17:48:12.435106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.596 qpair failed and we were unable to recover it. 00:31:13.596 [2024-10-14 17:48:12.435350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.596 [2024-10-14 17:48:12.435381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.596 qpair failed and we were unable to recover it. 00:31:13.596 [2024-10-14 17:48:12.435517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.596 [2024-10-14 17:48:12.435547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.596 qpair failed and we were unable to recover it. 00:31:13.596 [2024-10-14 17:48:12.435794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.596 [2024-10-14 17:48:12.435827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.596 qpair failed and we were unable to recover it. 00:31:13.596 [2024-10-14 17:48:12.436033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.596 [2024-10-14 17:48:12.436063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.596 qpair failed and we were unable to recover it. 00:31:13.596 [2024-10-14 17:48:12.436272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.596 [2024-10-14 17:48:12.436303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.596 qpair failed and we were unable to recover it. 00:31:13.596 [2024-10-14 17:48:12.436412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.596 [2024-10-14 17:48:12.436443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.596 qpair failed and we were unable to recover it. 00:31:13.596 [2024-10-14 17:48:12.436643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.596 [2024-10-14 17:48:12.436677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.596 qpair failed and we were unable to recover it. 
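Note: errno = 111 is ECONNREFUSED on Linux, i.e. nothing at 10.0.0.2 is accepting TCP connections on port 4420 (the NVMe/TCP default) while the host side keeps retrying its qpair connects. A minimal stand-alone sketch, assuming a Linux host and reusing the address/port from the log above (this program is hypothetical, not part of the SPDK tree), reproduces the same errno whenever no listener is present:

/* Hypothetical reproducer, not part of the SPDK tree: a plain blocking TCP
 * connect to 10.0.0.2:4420, roughly what the failing client-socket path in
 * posix_sock_create boils down to. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    /* With no nvmf target listening, connect() fails with errno = 111. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

The refusals coincide with the target process restarting (see the "Starting SPDK ... initialization" record just below), during which no listener exists on the port.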
00:31:13.596 [2024-10-14 17:48:12.436778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.596 [2024-10-14 17:48:12.436807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.596 qpair failed and we were unable to recover it.
00:31:13.596 [... the triplet repeats at 17:48:12.436973 and 17:48:12.437195 ...]
00:31:13.596 [2024-10-14 17:48:12.437318] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization...
00:31:13.596 [2024-10-14 17:48:12.437371] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:13.596 [... the triplet for tqpair=0x7f1a18000b90 continues, interleaved with the startup records above, from 17:48:12.437353 through 17:48:12.438475 (six further occurrences) ...]
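Note: the two initialization records above are the nvmf target coming back up on SPDK v25.01-pre (git sha1 2a72c3069) with DPDK 24.03.0. Of the EAL parameters, -c 0xF0 is the core mask: 0xF0 = 0b11110000, so the target runs pinned to CPU cores 4-7, and --file-prefix=spdk0 keeps this process's hugepage files separate from any other DPDK process on the machine. A tiny hypothetical helper (not part of the test suite) that decodes such a mask:

/* Hypothetical core-mask decoder, not part of the SPDK/DPDK tree. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xF0;      /* value handed to the EAL via -c */

    printf("cores selected by 0x%lX:", mask);
    for (int core = 0; core < 64; core++) {
        if (mask & (1UL << core)) {
            printf(" %d", core);
        }
    }
    printf("\n");                   /* prints: cores selected by 0xF0: 4 5 6 7 */

    return 0;
}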
00:31:13.597 [... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x7f1a18000b90 (addr=10.0.0.2, port=4420) continues from 17:48:12.438765 through 17:48:12.444563 (30 occurrences) ...]
00:31:13.597 [... two further triplets for tqpair=0x7f1a18000b90 at 17:48:12.444691 and 17:48:12.444832 ...]
00:31:13.597 [2024-10-14 17:48:12.445184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.597 [2024-10-14 17:48:12.445256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.597 qpair failed and we were unable to recover it.
00:31:13.597 [... the tqpair address changes here from 0x7f1a18000b90 to 0x7f1a20000b90; the triplet for the new qpair repeats from 17:48:12.445475 through 17:48:12.446874 (seven further occurrences) ...]
00:31:13.598 [... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x7f1a20000b90 (addr=10.0.0.2, port=4420) continues from 17:48:12.447089 through 17:48:12.457205 (50 occurrences) ...]
00:31:13.598 [2024-10-14 17:48:12.457467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.598 [2024-10-14 17:48:12.457498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.598 qpair failed and we were unable to recover it. 00:31:13.598 [2024-10-14 17:48:12.457613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.598 [2024-10-14 17:48:12.457645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.598 qpair failed and we were unable to recover it. 00:31:13.598 [2024-10-14 17:48:12.457836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.598 [2024-10-14 17:48:12.457867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.598 qpair failed and we were unable to recover it. 00:31:13.598 [2024-10-14 17:48:12.457988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.598 [2024-10-14 17:48:12.458020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.598 qpair failed and we were unable to recover it. 00:31:13.598 [2024-10-14 17:48:12.458284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.598 [2024-10-14 17:48:12.458315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.598 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.458557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.458588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.458780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.458813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.459101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.459133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.459263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.459294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.459490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.459522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 
00:31:13.599 [2024-10-14 17:48:12.459629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.459662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.459833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.459865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.459996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.460028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.460207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.460239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.460425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.460457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.460577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.460622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.460728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.460759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.460863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.460895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.461094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.461125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.461318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.461348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 
00:31:13.599 [2024-10-14 17:48:12.461544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.461577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.461771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.461803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.462043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.462115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.462276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.462314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.462425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.462463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.462568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.462617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.462810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.462843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.462963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.462995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.463263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.463295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.463554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.463586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 
00:31:13.599 [2024-10-14 17:48:12.463788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.463821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.464013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.464046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.464288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.464320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.464502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.464534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.464708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.464740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.464933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.464964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.465169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.465202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.465327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.465361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.465625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.465659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 00:31:13.599 [2024-10-14 17:48:12.465849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.599 [2024-10-14 17:48:12.465881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.599 qpair failed and we were unable to recover it. 
00:31:13.599 [2024-10-14 17:48:12.465987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.466018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.466203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.466234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.466341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.466374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.466567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.466598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.466747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.466778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.466946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.466978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.467214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.467246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.467365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.467397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.467506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.467536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.467723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.467763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 
00:31:13.600 [2024-10-14 17:48:12.467875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.467908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.468109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.468140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.468312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.468342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.468527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.468559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.468695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.468727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.468963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.468995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.469188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.469218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.469398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.469430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.469620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.469654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.469877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.469909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 
00:31:13.600 [2024-10-14 17:48:12.470026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.470058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.470250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.470282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.470461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.470493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.470717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.470752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.470869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.470901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.471077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.471110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.471291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.471323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.471429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.471459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.471701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.471734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.471913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.471946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 
00:31:13.600 [2024-10-14 17:48:12.472126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.472157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.472326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.472359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.472478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.472510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.472703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.472736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.472868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.472898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.473067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.473099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.473337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.473375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.473637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.473670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.473803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.473835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.474003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.474035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 
00:31:13.600 [2024-10-14 17:48:12.474218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.600 [2024-10-14 17:48:12.474249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.600 qpair failed and we were unable to recover it. 00:31:13.600 [2024-10-14 17:48:12.474372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.474403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.474670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.474702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.474914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.474946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.475061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.475093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.475204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.475236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.475473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.475505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.475681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.475715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.475911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.475943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.476139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.476170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 
00:31:13.601 [2024-10-14 17:48:12.476408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.476441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.476623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.476656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.476863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.476895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.477016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.477047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.477315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.477346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.477467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.477498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.477670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.477703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.477946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.477977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.478094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.478126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.478297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.478329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 
00:31:13.601 [2024-10-14 17:48:12.478614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.478647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.478768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.478799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.478915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.478947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.479062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.479094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.479313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.479345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.479525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.479556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.479700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.479733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.479907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.479939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.480124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.480156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.480264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.480295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 
00:31:13.601 [2024-10-14 17:48:12.480465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.480497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.480692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.480726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.480912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.480943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.481131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.481163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.481273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.481304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.481430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.481462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.481646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.481679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.481855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.481926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.482232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.482266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.482390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.482422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 
00:31:13.601 [2024-10-14 17:48:12.482545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.482576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.601 [2024-10-14 17:48:12.482693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.601 [2024-10-14 17:48:12.482724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.601 qpair failed and we were unable to recover it. 00:31:13.602 [2024-10-14 17:48:12.482904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.602 [2024-10-14 17:48:12.482934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.602 qpair failed and we were unable to recover it. 00:31:13.602 [2024-10-14 17:48:12.483102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.602 [2024-10-14 17:48:12.483134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.602 qpair failed and we were unable to recover it. 00:31:13.602 [2024-10-14 17:48:12.483302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.602 [2024-10-14 17:48:12.483332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.602 qpair failed and we were unable to recover it. 00:31:13.602 [2024-10-14 17:48:12.483468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.602 [2024-10-14 17:48:12.483499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.602 qpair failed and we were unable to recover it. 00:31:13.602 [2024-10-14 17:48:12.483679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.602 [2024-10-14 17:48:12.483713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.602 qpair failed and we were unable to recover it. 00:31:13.602 [2024-10-14 17:48:12.483970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.602 [2024-10-14 17:48:12.484002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.602 qpair failed and we were unable to recover it. 00:31:13.602 [2024-10-14 17:48:12.484244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.602 [2024-10-14 17:48:12.484275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.602 qpair failed and we were unable to recover it. 00:31:13.602 [2024-10-14 17:48:12.484458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.602 [2024-10-14 17:48:12.484489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.602 qpair failed and we were unable to recover it. 
00:31:13.602 [2024-10-14 17:48:12.484690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.602 [2024-10-14 17:48:12.484731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:13.602 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error / qpair failed triplet repeats for tqpair=0x7f1a14000b90, 17:48:12.484918 through 17:48:12.499137 ...]
00:31:13.603 [2024-10-14 17:48:12.499388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.603 [2024-10-14 17:48:12.499458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.603 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7f1a18000b90, 17:48:12.499665 through 17:48:12.507229 ...]
00:31:13.604 [2024-10-14 17:48:12.507394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.604 [2024-10-14 17:48:12.507464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:13.604 qpair failed and we were unable to recover it.
00:31:13.604 [2024-10-14 17:48:12.507728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.604 [2024-10-14 17:48:12.507799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.604 qpair failed and we were unable to recover it.
00:31:13.605 [2024-10-14 17:48:12.508023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.605 [2024-10-14 17:48:12.508060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:13.605 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7f1a14000b90, 17:48:12.508240 through 17:48:12.515197 ...]
00:31:13.605 [2024-10-14 17:48:12.515248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[... two more identical failures for tqpair=0x7f1a14000b90 at 17:48:12.515317 and 17:48:12.515532 ...]
00:31:13.605 [2024-10-14 17:48:12.515719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.605 [2024-10-14 17:48:12.515765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:13.605 qpair failed and we were unable to recover it.
00:31:13.605 [2024-10-14 17:48:12.515913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.605 [2024-10-14 17:48:12.515956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.605 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7f1a20000b90, 17:48:12.516095 through 17:48:12.527506 ...]
00:31:13.607 [2024-10-14 17:48:12.527634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.607 [2024-10-14 17:48:12.527669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.607 qpair failed and we were unable to recover it.
00:31:13.607 [2024-10-14 17:48:12.527795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-10-14 17:48:12.527828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.607 qpair failed and we were unable to recover it. 00:31:13.607 [2024-10-14 17:48:12.527931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-10-14 17:48:12.527962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.607 qpair failed and we were unable to recover it. 00:31:13.607 [2024-10-14 17:48:12.528086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-10-14 17:48:12.528118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.607 qpair failed and we were unable to recover it. 00:31:13.607 [2024-10-14 17:48:12.528245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-10-14 17:48:12.528276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.607 qpair failed and we were unable to recover it. 00:31:13.607 [2024-10-14 17:48:12.528452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-10-14 17:48:12.528483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.607 qpair failed and we were unable to recover it. 00:31:13.607 [2024-10-14 17:48:12.528597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-10-14 17:48:12.528645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.607 qpair failed and we were unable to recover it. 00:31:13.607 [2024-10-14 17:48:12.528759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-10-14 17:48:12.528791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.607 qpair failed and we were unable to recover it. 00:31:13.607 [2024-10-14 17:48:12.528897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-10-14 17:48:12.528930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.607 qpair failed and we were unable to recover it. 00:31:13.607 [2024-10-14 17:48:12.529063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-10-14 17:48:12.529095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.607 qpair failed and we were unable to recover it. 00:31:13.607 [2024-10-14 17:48:12.529219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-10-14 17:48:12.529251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.607 qpair failed and we were unable to recover it. 
00:31:13.607 [2024-10-14 17:48:12.529361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-10-14 17:48:12.529392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.607 qpair failed and we were unable to recover it. 00:31:13.607 [2024-10-14 17:48:12.529514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-10-14 17:48:12.529546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.607 qpair failed and we were unable to recover it. 00:31:13.607 [2024-10-14 17:48:12.529678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-10-14 17:48:12.529713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.607 qpair failed and we were unable to recover it. 00:31:13.607 [2024-10-14 17:48:12.529900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-10-14 17:48:12.529931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.607 qpair failed and we were unable to recover it. 00:31:13.607 [2024-10-14 17:48:12.530064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-10-14 17:48:12.530098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.607 qpair failed and we were unable to recover it. 00:31:13.607 [2024-10-14 17:48:12.530233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.607 [2024-10-14 17:48:12.530265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.530467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.530499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.530679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.530713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.530912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.530945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.531183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.531215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 
00:31:13.608 [2024-10-14 17:48:12.531402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.531435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.531555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.531589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.531720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.531752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.531884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.531916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.532123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.532157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.532271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.532302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.532510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.532541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.532662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.532695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.532955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.532986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.533089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.533121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 
00:31:13.608 [2024-10-14 17:48:12.533260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.533291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.533418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.533456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.533655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.533688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.533911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.533942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.534068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.534100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.534206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.534237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.534407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.534439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.534554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.534587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.534722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.534755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.534887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.534919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 
00:31:13.608 [2024-10-14 17:48:12.535109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.535142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.535315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.535347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.535470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.535502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.535631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.535665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.535788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.535819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.535944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.535976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.536078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.536110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.536285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.536316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.536453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.536485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.536613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.536646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 
00:31:13.608 [2024-10-14 17:48:12.536750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.536782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.537021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.537054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.537230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.537262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.537360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.537391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.537636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.537670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.608 [2024-10-14 17:48:12.537842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.608 [2024-10-14 17:48:12.537873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.608 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.538062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.538094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.538200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.538231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.538356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.538389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.538489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.538520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 
00:31:13.609 [2024-10-14 17:48:12.538642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.538675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.538876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.538907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.539026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.539059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.539174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.539205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.539310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.539343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.539534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.539566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.539678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.539711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.539825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.539858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.540120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.540151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.540266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.540298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 
00:31:13.609 [2024-10-14 17:48:12.540420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.540454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.540639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.540679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.540856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.540893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.541081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.541114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.541255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.541288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.541473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.541504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.541691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.541724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.541848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.541880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.541990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.542027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.542202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.542239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 
00:31:13.609 [2024-10-14 17:48:12.542427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.542459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.542649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.542683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.542873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.542906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.543087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.543120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.543235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.543266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.543393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.543425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.543615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.543649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.543767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.543799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.543900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.543931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.544062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.544094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 
00:31:13.609 [2024-10-14 17:48:12.544276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.544307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.544414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.544445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.544569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.544611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.544721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.544755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.544865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.544905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.545078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.609 [2024-10-14 17:48:12.545111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.609 qpair failed and we were unable to recover it. 00:31:13.609 [2024-10-14 17:48:12.545214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.545245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.545353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.545385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.545581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.545639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.545926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.545959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 
00:31:13.610 [2024-10-14 17:48:12.546067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.546100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.546202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.546234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.546406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.546438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.546632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.546667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.546784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.546815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.546934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.546966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.547069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.547108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.547233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.547264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.547439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.547470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.547767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.547801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 
00:31:13.610 [2024-10-14 17:48:12.548039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.548071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.548276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.548308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.548421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.548454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.548574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.548615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.548856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.548889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.549056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.549088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.549268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.549300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.549424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.549456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.549572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.549613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.549815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.549847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 
00:31:13.610 [2024-10-14 17:48:12.550018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.550051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.550167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.550199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.550372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.550404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.550528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.550560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.550760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.550793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.550904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.550941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.551202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.551234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.551354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.551386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.551491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.551522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 00:31:13.610 [2024-10-14 17:48:12.551701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.610 [2024-10-14 17:48:12.551734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.610 qpair failed and we were unable to recover it. 
00:31:13.610 [2024-10-14 17:48:12.552000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.610 [2024-10-14 17:48:12.552032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.610 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats 29 more times for tqpair=0x7f1a20000b90 (17:48:12.552206 through 17:48:12.557560) ...]
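[Editor's aside - not part of the captured log. On Linux, errno 111 is ECONNREFUSED: each connect() issued by posix_sock_create reached 10.0.0.2, but nothing was accepting TCP connections on port 4420 at that moment, so every NVMe/TCP qpair connect was refused and retried. A minimal shell probe of the same condition, illustrative only (assumes bash's /dev/tcp support on the test host):

  # Try the same TCP connect posix_sock_create makes; with no listener on
  # 10.0.0.2:4420 this fails with "Connection refused" (errno 111).
  if bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "listener up on 10.0.0.2:4420"
  else
      echo "connect() refused - no listener on 10.0.0.2:4420 (ECONNREFUSED/111)"
  fi
]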
00:31:13.611 [2024-10-14 17:48:12.557777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.611 [2024-10-14 17:48:12.557823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:13.611 qpair failed and we were unable to recover it.
00:31:13.611 [2024-10-14 17:48:12.557939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.611 [2024-10-14 17:48:12.557977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:13.611 qpair failed and we were unable to recover it.
[... the same sequence repeats 3 more times for tqpair=0x2491c60 (17:48:12.558201 through 17:48:12.558664) ...]
00:31:13.611 [2024-10-14 17:48:12.558743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:13.611 [2024-10-14 17:48:12.558773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:13.611 [2024-10-14 17:48:12.558781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:13.611 [2024-10-14 17:48:12.558789] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:13.611 [2024-10-14 17:48:12.558796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[... the connect() failure sequence repeats 3 more times for tqpair=0x2491c60 (17:48:12.558863 through 17:48:12.559246) ...]
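[Editor's aside - not part of the captured log. The app_setup_trace notices above already spell out the tracing recipe; restated as commands taken verbatim from the notices (the /tmp destination is the only addition here):

  # Snapshot live tracepoints of the running nvmf target:
  spdk_trace -s nvmf -i 0
  # Or keep the shared-memory trace file for offline analysis/debug:
  cp /dev/shm/nvmf_trace.0 /tmp/
]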
[... the connect() failure sequence repeats 5 more times for tqpair=0x2491c60 (17:48:12.559369 through 17:48:12.560382) ...]
00:31:13.611 [2024-10-14 17:48:12.560357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:31:13.612 [2024-10-14 17:48:12.560466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:31:13.612 [2024-10-14 17:48:12.560576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:31:13.612 [2024-10-14 17:48:12.560577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
[... the connect() failure sequence repeats 4 more times for tqpair=0x2491c60 (17:48:12.560580 through 17:48:12.561284), interleaved with the reactor start-up notices above ...]
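[Editor's aside - not part of the captured log. The four reactors on cores 4-7 correspond to a core mask of 0xF0 (bits 4 through 7 set). A hypothetical invocation that would yield this layout, assuming SPDK's usual -m core-mask option; the actual target command line is not shown in this excerpt:

  # cores 4,5,6,7 -> binary 1111_0000 -> mask 0xF0
  ./build/bin/nvmf_tgt -m 0xF0
]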
[... the connect() failure sequence repeats 12 more times for tqpair=0x2491c60 (17:48:12.561466 through 17:48:12.563729) ...]
00:31:13.612 [2024-10-14 17:48:12.563857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.612 [2024-10-14 17:48:12.563906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.612 qpair failed and we were unable to recover it.
00:31:13.612 [2024-10-14 17:48:12.564095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.612 [2024-10-14 17:48:12.564131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.612 qpair failed and we were unable to recover it.
[... repeats 6 more times for tqpair=0x7f1a20000b90 (17:48:12.564308 through 17:48:12.565284) ...]
[... the connect() failure sequence continues for tqpair=0x7f1a20000b90: 30 more repetitions (17:48:12.565406 through 17:48:12.571238) ...]
[... 2 more repetitions for tqpair=0x7f1a20000b90 (17:48:12.571368 through 17:48:12.571633) ...]
00:31:13.613 [2024-10-14 17:48:12.571770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.613 [2024-10-14 17:48:12.571808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.613 qpair failed and we were unable to recover it.
[... repeats 7 more times for tqpair=0x7f1a18000b90 (17:48:12.571938 through 17:48:12.573062) ...]
[... the connect() failure sequence continues for tqpair=0x7f1a18000b90: 13 more repetitions (17:48:12.573180 through 17:48:12.575302) ...]
00:31:13.613 [2024-10-14 17:48:12.575546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.614 [2024-10-14 17:48:12.575589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.614 qpair failed and we were unable to recover it.
[... repeats 6 more times for tqpair=0x7f1a20000b90 (17:48:12.575775 through 17:48:12.576837) ...]
00:31:13.614 [2024-10-14 17:48:12.577026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.577059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.577181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.577213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.577394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.577428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.577633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.577669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.577787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.577819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.577993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.578033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.578160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.578192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.578310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.578342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.578513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.578548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.578693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.578726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 
00:31:13.614 [2024-10-14 17:48:12.578899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.578931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.579043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.579075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.579249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.579301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.579420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.579453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.579651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.579686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.579794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.579826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.579957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.579991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.580097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.580130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.580258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.580290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.580560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.580595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 
00:31:13.614 [2024-10-14 17:48:12.580786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.580821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.580938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.580972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.581077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.581110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.581221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.581253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.581438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.581471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.581590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.581633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.581770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.581802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.581910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.581943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.582128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.582160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.582345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.582378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 
00:31:13.614 [2024-10-14 17:48:12.582497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.582529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.582673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.582708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.582812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.582844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.614 qpair failed and we were unable to recover it. 00:31:13.614 [2024-10-14 17:48:12.583029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.614 [2024-10-14 17:48:12.583062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.583251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.583283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.583461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.583494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.583597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.583641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.583762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.583799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.583990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.584022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.584205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.584237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 
00:31:13.615 [2024-10-14 17:48:12.584409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.584441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.584549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.584581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.584713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.584745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.584863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.584894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.585007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.585037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.585253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.585285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.585395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.585426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.585538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.585570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.585705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.585739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.585857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.585889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 
00:31:13.615 [2024-10-14 17:48:12.586073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.586105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.586228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.586259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.586375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.586412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.586626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.586661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.586839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.586871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.587078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.587111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.587305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.587337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.587450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.587481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.587617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.587650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 00:31:13.615 [2024-10-14 17:48:12.587754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.615 [2024-10-14 17:48:12.587785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.615 qpair failed and we were unable to recover it. 
00:31:13.615 [2024-10-14 17:48:12.588024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.615 [2024-10-14 17:48:12.588056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.615 qpair failed and we were unable to recover it.
00:31:13.615 [2024-10-14 17:48:12.588231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.615 [2024-10-14 17:48:12.588262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.615 qpair failed and we were unable to recover it.
00:31:13.615 [2024-10-14 17:48:12.588523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.615 [2024-10-14 17:48:12.588554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.615 qpair failed and we were unable to recover it.
00:31:13.615 [2024-10-14 17:48:12.588737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.615 [2024-10-14 17:48:12.588770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.615 qpair failed and we were unable to recover it.
00:31:13.615 [2024-10-14 17:48:12.588884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.615 [2024-10-14 17:48:12.588915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.615 qpair failed and we were unable to recover it.
00:31:13.615 [2024-10-14 17:48:12.589089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.615 [2024-10-14 17:48:12.589121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.615 qpair failed and we were unable to recover it.
00:31:13.615 [2024-10-14 17:48:12.589308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.615 [2024-10-14 17:48:12.589340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.615 qpair failed and we were unable to recover it.
00:31:13.615 [2024-10-14 17:48:12.589454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.615 [2024-10-14 17:48:12.589485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.615 qpair failed and we were unable to recover it.
00:31:13.615 [2024-10-14 17:48:12.589675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.615 [2024-10-14 17:48:12.589709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.615 qpair failed and we were unable to recover it.
00:31:13.615 [2024-10-14 17:48:12.589817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.615 [2024-10-14 17:48:12.589848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.615 qpair failed and we were unable to recover it.
00:31:13.615 [2024-10-14 17:48:12.590031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.615 [2024-10-14 17:48:12.590062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.615 qpair failed and we were unable to recover it.
00:31:13.615 [2024-10-14 17:48:12.590256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.615 [2024-10-14 17:48:12.590288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.615 qpair failed and we were unable to recover it.
00:31:13.615 [2024-10-14 17:48:12.590399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.615 [2024-10-14 17:48:12.590431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.615 qpair failed and we were unable to recover it.
00:31:13.615 [2024-10-14 17:48:12.590616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.615 [2024-10-14 17:48:12.590649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.615 qpair failed and we were unable to recover it.
00:31:13.615 [2024-10-14 17:48:12.590766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.590797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.591013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.591044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.591311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.591343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.591470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.591502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.591629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.591662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.591785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.591816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.592007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.592038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.592153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.592185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.592320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.592352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.592530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.592562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.592694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.592727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.592922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.592955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.593078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.593110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.593279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.593310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.593490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.593523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.593661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.593695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.593880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.593911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.594095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.594129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.594409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.594449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.594554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.594585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.594745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.594777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.594952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.594984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.595106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.595139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.595350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.595380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.595508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.595540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.595730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.595762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.595940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.595971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.596087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.596119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.596242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.596274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.596446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.596479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.596596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.596639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.596829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.596862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.597066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.597099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.597278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.597310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.597437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.597468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.616 [2024-10-14 17:48:12.597583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.616 [2024-10-14 17:48:12.597626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.616 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.597801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.597833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.597944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.597976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.598161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.598192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.598296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.598328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.598435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.598467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.598610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.598643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.598827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.598858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.599035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.599067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.599187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.599218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.599413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.599445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.599622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.599654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.599755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.599787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.599915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.599946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.600067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.600099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.600266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.600297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.600426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.600458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.600686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.600719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.600837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.600869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.601002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.601034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.601170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.601202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.601382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.601415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.601547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.601578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.601825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.601863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.601969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.602001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.602179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.602211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.602391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.602424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.602541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.602573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.602785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.602838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.603017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.603049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.603307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.603338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.603441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.603473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.603592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.603638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.603759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.603790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.603996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.604027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.604138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.604171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.604291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.604324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.604464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.604496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.604679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.604713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.604856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.604887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.605049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.617 [2024-10-14 17:48:12.605080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.617 qpair failed and we were unable to recover it.
00:31:13.617 [2024-10-14 17:48:12.605192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.605223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.605329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.605360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.605477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.605508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.605637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.605671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.605771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.605802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.606043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.606074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.606192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.606224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.606329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.606360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.606476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.606507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.606700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.606735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.606919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.606952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.607133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.607166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.607284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.607316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.607491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.607524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.607711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.607746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.607929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.607962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.608123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.608155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.608263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.608296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.608410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.608441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.608563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.608596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.608818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.608852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.608975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.609007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.609183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.609228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.609408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.609450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.609644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.609681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.609877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.609912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.610048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.610083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.610258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.610289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.610418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.610450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.610628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.610662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.610852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.610884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.611002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.611036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.611153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.611185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.611377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.611409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.611526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.611557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.611717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.611750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.611862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.611895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.612063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.612094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.612212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.612253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.612431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.618 [2024-10-14 17:48:12.612462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.618 qpair failed and we were unable to recover it.
00:31:13.618 [2024-10-14 17:48:12.612754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.612788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.612960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.612992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.613261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.613294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.613426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.613458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.613566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.613598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.613779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.613812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.613938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.613969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.614143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.614174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.614361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.614393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.614511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.614542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.614742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.614776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.614957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.614989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.615230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.615262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.615449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.615480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.615627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.615661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.615788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.615819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.615955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.615987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.616161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.616192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.616322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.616354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.616474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.616505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.616709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.616743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.616929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.616962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.617132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.617170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.617287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.617319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.617436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.617467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.617582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.617624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.617740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.617772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.617965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.617997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.618098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.618129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.618336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.618366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.618483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.618515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.618644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.618677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.618852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.618883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.619000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.619031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.619205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.619236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.619421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.619453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.619676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.619709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.619827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.619858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.619972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.620004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.620106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.619 [2024-10-14 17:48:12.620136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.619 qpair failed and we were unable to recover it.
00:31:13.619 [2024-10-14 17:48:12.620249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.619 [2024-10-14 17:48:12.620281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.619 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.620386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.620418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.620674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.620706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.620809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.620841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.620947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.620978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.621145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.621177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.621288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.621319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.621614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.621647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.621815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.621846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.621978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.622010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 
00:31:13.620 [2024-10-14 17:48:12.622138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.622170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.622281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.622312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.622485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.622516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.622700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.622732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.622839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.622870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.623111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.623143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.623270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.623300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.623406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.623437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.623538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.623569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.623734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.623796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 
00:31:13.620 [2024-10-14 17:48:12.623911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.623945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.624079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.624111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.624215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.624256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.624377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.624410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.624513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.624545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.624661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.624693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.624874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.624906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.625147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.625179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.625363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.625395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.625513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.625545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 
00:31:13.620 [2024-10-14 17:48:12.625675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.625708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.625809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.625841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.626055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.626087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.626207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.626238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.626363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.626394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.626555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.626587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.626835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.620 [2024-10-14 17:48:12.626867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.620 qpair failed and we were unable to recover it. 00:31:13.620 [2024-10-14 17:48:12.626978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.627009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.627125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.627157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.627365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.627397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 
00:31:13.621 [2024-10-14 17:48:12.627613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.627647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.627766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.627797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.627977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.628009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.628188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.628220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.628391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.628422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.628546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.628578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.628765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.628797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.629021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.629053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.629231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.629263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.629451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.629487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 
00:31:13.621 [2024-10-14 17:48:12.629619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.629651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.629829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.629861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.629987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.630019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.630132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.630164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.630280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.630312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.630432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.630464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.630578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.630617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.630723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.630755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.630877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.630909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.631081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.631113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 
00:31:13.621 [2024-10-14 17:48:12.631322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.631354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.631535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.631567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.631751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.631790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.631922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.631955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.632128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.632159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.632333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.632365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.632486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.632518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.632712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.632745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.632849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.632880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.632996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.633028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 
00:31:13.621 [2024-10-14 17:48:12.633218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.633248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.633435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.633466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.633592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.633634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.633765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.633797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.633968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.634000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.634117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.634148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.634291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.621 [2024-10-14 17:48:12.634322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.621 qpair failed and we were unable to recover it. 00:31:13.621 [2024-10-14 17:48:12.634432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.634464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.634649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.634682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.634873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.634905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 
00:31:13.622 [2024-10-14 17:48:12.635020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.635051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.635165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.635196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.635310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.635341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.635516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.635547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.635677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.635715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.635908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.635938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.636043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.636072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.636263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.636294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.636412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.636442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.636577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.636635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 
00:31:13.622 [2024-10-14 17:48:12.636764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.636798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.637038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.637070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.637182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.637212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.637386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.637418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.637547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.637578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.637803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.637835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.638016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.638048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.638177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.638209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.638321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.638351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.638542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.638574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 
00:31:13.622 [2024-10-14 17:48:12.638698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.638730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.638842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.638873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.638990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.639021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.639149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.639181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.639350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.639382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.639490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.639523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.639691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.639724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.639850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.639882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.640057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.640089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.640201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.640232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 
00:31:13.622 [2024-10-14 17:48:12.640400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.640431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.640537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.640568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.640710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.640746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.640927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.640960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.641145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.641177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.641280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.641311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.641445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.641483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.641588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.622 [2024-10-14 17:48:12.641644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.622 qpair failed and we were unable to recover it. 00:31:13.622 [2024-10-14 17:48:12.641764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.641796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.641911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.641942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 
00:31:13.623 [2024-10-14 17:48:12.642116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.642147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.642354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.642386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.642498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.642530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.642631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.642664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.642841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.642873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.643047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.643079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.643260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.643291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.643419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.643451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.643627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.643689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.643896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.643935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 
00:31:13.623 [2024-10-14 17:48:12.644118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.644150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.644322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.644354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.644487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.644519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.644660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.644694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.644865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.644896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.645019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.645050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.645160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.645191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.645293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.645325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.645430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.645462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.645574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.645630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 
00:31:13.623 [2024-10-14 17:48:12.645803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.645835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.645948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.645979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.646151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.646181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.646298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.646330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.646450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.646482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.646593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.646646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.646773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.646804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.646914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.646945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.647064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.647096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 00:31:13.623 [2024-10-14 17:48:12.647269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.623 [2024-10-14 17:48:12.647300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.623 qpair failed and we were unable to recover it. 
00:31:13.623 [2024-10-14 17:48:12.647416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.623 [2024-10-14 17:48:12.647447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420
00:31:13.623 qpair failed and we were unable to recover it.
00:31:13.623 [... the same three-line connect()/qpair failure repeats continuously from 17:48:12.647 to 17:48:12.660, cycling through tqpair=0x7f1a18000b90, 0x7f1a20000b90, 0x7f1a14000b90, and 0x2491c60, all with addr=10.0.0.2, port=4420 ...]
00:31:13.625 [2024-10-14 17:48:12.660587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.625 [2024-10-14 17:48:12.660633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420
00:31:13.625 qpair failed and we were unable to recover it.
00:31:13.625 [... failure sequence repeats for tqpair=0x2491c60 through 17:48:12.663, with the test's shell trace interleaved mid-record; the trace lines, in order: ...]
00:31:13.625 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:13.625 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:31:13.625 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:31:13.625 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:31:13.626 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:13.626 [... failure sequence continues, one occurrence each for tqpair=0x7f1a14000b90 and tqpair=0x7f1a20000b90 ...]
00:31:13.626 [... the connect() failed, errno = 111 / qpair failed sequence keeps repeating from 17:48:12.663 to 17:48:12.685, cycling through tqpair=0x7f1a20000b90, 0x7f1a14000b90, 0x2491c60, and 0x7f1a18000b90, all with addr=10.0.0.2, port=4420 ...]
00:31:13.629 [2024-10-14 17:48:12.685912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.629 [2024-10-14 17:48:12.685946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420
00:31:13.629 qpair failed and we were unable to recover it.
00:31:13.629 [2024-10-14 17:48:12.686052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.686084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.686189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.686219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.686391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.686421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.686621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.686654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.686753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.686783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.686898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.686929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.687131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.687162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.687284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.687316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.687427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.687459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.687574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.687612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 
00:31:13.629 [2024-10-14 17:48:12.687848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.687879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.688009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.688039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.688169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.688200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.688369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.688400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.688577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.688617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.688724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.688755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.688871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.688902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.689018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.689050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.689155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.689186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.689287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.689321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 
00:31:13.629 [2024-10-14 17:48:12.689433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.689463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.689564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.689594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.689804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.689837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.690021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.690053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.690171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.690201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.690308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.690343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.690459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.690490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.690591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.690638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.690905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.690939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.691058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.691089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 
00:31:13.629 [2024-10-14 17:48:12.691217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.629 [2024-10-14 17:48:12.691249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.629 qpair failed and we were unable to recover it. 00:31:13.629 [2024-10-14 17:48:12.691392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.691424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.691532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.691563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.691760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.691793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.691927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.691959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.692077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.692109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.692229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.692261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.692388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.692420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.692534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.692572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.692705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.692740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 
00:31:13.630 [2024-10-14 17:48:12.692870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.692902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.693019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.693052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.693180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.693211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.693343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.693375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.693539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.693570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.693779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.693814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.693932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.693964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.694093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.694124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.694235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.694268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.694392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.694422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 
00:31:13.630 [2024-10-14 17:48:12.694530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.694562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.694684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.694717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.694832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.694864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.694970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.695002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.695119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.695152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.695330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.695362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.695490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.695522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.695642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.695675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.695789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.695819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.695927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.695959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 
00:31:13.630 [2024-10-14 17:48:12.696081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.696112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.696219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.696250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.696432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.696463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.696568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.696609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.696728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.696759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.696873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.696913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.697031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.697063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.697181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.697213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 00:31:13.630 [2024-10-14 17:48:12.697315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.630 [2024-10-14 17:48:12.697346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.630 qpair failed and we were unable to recover it. 
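errno = 111 is ECONNREFUSED on Linux: the host keeps retrying a TCP connect() to 10.0.0.2:4420 while no NVMe/TCP target is listening there, which is the condition this target_disconnect test deliberately creates. A minimal shell sketch of the same failure mode, not part of the test suite (only the address and port are taken from the log above):

    # bash's /dev/tcp connect fails with "Connection refused" (errno 111)
    # when nothing listens on the target port, just like posix_sock_create above;
    # timeout guards against a silent hang if SYNs are dropped instead.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "connect to 10.0.0.2:4420 refused, matching errno = 111 in the log"
    fi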
00:31:13.630 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:13.630-00:31:13.631 [17:48:12.697458 through 17:48:12.697810] the connect()/qpair failure triple repeats 3x for tqpair=0x2491c60.
00:31:13.631 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:13.631 [17:48:12.697946 through 17:48:12.698126] the failure triple repeats 2x for tqpair=0x7f1a18000b90.
00:31:13.631 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:13.631 [17:48:12.698248 through 17:48:12.698415] the failure triple repeats 2x for tqpair=0x7f1a18000b90.
00:31:13.631 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:13.631 [17:48:12.698517] the failure triple occurs once more for tqpair=0x7f1a18000b90.
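For reference: the trap line above registers the test's cleanup path (process_shm, then nvmftestfini) on SIGINT/SIGTERM/EXIT, and rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py. A sketch of the equivalent direct invocation, assuming a target reachable on the default local RPC socket (this command line is illustrative, not output from this run):

    # create a 64 MiB RAM-backed bdev with 512-byte blocks, named Malloc0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0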
00:31:13.631-00:31:13.899 [2024-10-14 17:48:12.698675 through 17:48:12.712452] the same three-line failure continues uninterrupted for dozens more occurrences, cycling through tqpair=0x7f1a18000b90, 0x7f1a14000b90, and 0x7f1a20000b90, always with addr=10.0.0.2, port=4420; the final occurrence is:
00:31:13.899 [2024-10-14 17:48:12.712421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.899 [2024-10-14 17:48:12.712452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.899 qpair failed and we were unable to recover it.
00:31:13.899 [2024-10-14 17:48:12.712643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.899 [2024-10-14 17:48:12.712684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.899 qpair failed and we were unable to recover it. 00:31:13.899 [2024-10-14 17:48:12.712855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.899 [2024-10-14 17:48:12.712886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.899 qpair failed and we were unable to recover it. 00:31:13.899 [2024-10-14 17:48:12.712995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.899 [2024-10-14 17:48:12.713027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.899 qpair failed and we were unable to recover it. 00:31:13.899 [2024-10-14 17:48:12.713155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.899 [2024-10-14 17:48:12.713194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.899 qpair failed and we were unable to recover it. 00:31:13.899 [2024-10-14 17:48:12.713308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.899 [2024-10-14 17:48:12.713340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.899 qpair failed and we were unable to recover it. 00:31:13.899 [2024-10-14 17:48:12.713443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.899 [2024-10-14 17:48:12.713474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.899 qpair failed and we were unable to recover it. 00:31:13.899 [2024-10-14 17:48:12.713651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.899 [2024-10-14 17:48:12.713684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.899 qpair failed and we were unable to recover it. 00:31:13.899 [2024-10-14 17:48:12.713790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.899 [2024-10-14 17:48:12.713822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.899 qpair failed and we were unable to recover it. 00:31:13.899 [2024-10-14 17:48:12.713930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.899 [2024-10-14 17:48:12.713961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.899 qpair failed and we were unable to recover it. 00:31:13.899 [2024-10-14 17:48:12.714146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.899 [2024-10-14 17:48:12.714177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.899 qpair failed and we were unable to recover it. 
00:31:13.899 [2024-10-14 17:48:12.714352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.899 [2024-10-14 17:48:12.714383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.899 qpair failed and we were unable to recover it. 00:31:13.899 [2024-10-14 17:48:12.714566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.899 [2024-10-14 17:48:12.714597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.899 qpair failed and we were unable to recover it. 00:31:13.899 [2024-10-14 17:48:12.714735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.899 [2024-10-14 17:48:12.714766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.899 qpair failed and we were unable to recover it. 00:31:13.899 [2024-10-14 17:48:12.714874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.899 [2024-10-14 17:48:12.714905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.899 qpair failed and we were unable to recover it. 00:31:13.899 [2024-10-14 17:48:12.715167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.899 [2024-10-14 17:48:12.715198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.899 qpair failed and we were unable to recover it. 00:31:13.899 [2024-10-14 17:48:12.715443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.899 [2024-10-14 17:48:12.715474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.715650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.715690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.715809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.715840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.715947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.715979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.716085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.716116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 
00:31:13.900 [2024-10-14 17:48:12.716289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.716320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.716431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.716463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.716635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.716668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.716776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.716807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.716987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.717018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.717190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.717220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.717323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.717355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.717471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.717501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.717692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.717724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.717832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.717863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 
00:31:13.900 [2024-10-14 17:48:12.718077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.718108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.718278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.718308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.718431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.718461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.718581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.718620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.718740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.718772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.718943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.718974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.719075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.719105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.719311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.719342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.719462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.719493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.719703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.719737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 
00:31:13.900 [2024-10-14 17:48:12.719847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.719876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.719993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.720025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.720143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.720174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a14000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.720318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.720374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.720515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.720551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.720685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.720721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.720849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.720881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.721051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.721082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.721191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.721224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.721344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.721375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 
00:31:13.900 [2024-10-14 17:48:12.721542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.721573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.721723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.721757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.721946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.721977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.722097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.722128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.722228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.722259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.900 [2024-10-14 17:48:12.722378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.900 [2024-10-14 17:48:12.722409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.900 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.722611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.722649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.722752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.722784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.722989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.723021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.723206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.723238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 
00:31:13.901 [2024-10-14 17:48:12.723353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.723385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.723504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.723535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.723725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.723758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.723875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.723906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.724097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.724128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.724234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.724265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.724370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.724400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.724517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.724549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.724663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.724695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.724869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.724901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 
00:31:13.901 [2024-10-14 17:48:12.725141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.725172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.725350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.725382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.725567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.725598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.725756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.725787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.725904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.725936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.726128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.726159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.726266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.726297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.726475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.726507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.726622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.726655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.726782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.726813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 
00:31:13.901 [2024-10-14 17:48:12.726999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.727031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.727145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.727175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.727285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.727314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.727447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.727489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.727640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.727676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.727854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.727886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.728001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.728033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.728207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.728238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.728358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.728390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.728569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.728620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 
00:31:13.901 [2024-10-14 17:48:12.728733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.728765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.728866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.728898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.729014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.729046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.729249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.729281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.729471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.729503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.729636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.901 [2024-10-14 17:48:12.729670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.901 qpair failed and we were unable to recover it. 00:31:13.901 [2024-10-14 17:48:12.729853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.902 [2024-10-14 17:48:12.729891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.902 qpair failed and we were unable to recover it. 00:31:13.902 [2024-10-14 17:48:12.730015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.902 [2024-10-14 17:48:12.730047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.902 qpair failed and we were unable to recover it. 00:31:13.902 [2024-10-14 17:48:12.730251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.902 [2024-10-14 17:48:12.730283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.902 qpair failed and we were unable to recover it. 00:31:13.902 [2024-10-14 17:48:12.730442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.902 [2024-10-14 17:48:12.730473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.902 qpair failed and we were unable to recover it. 
00:31:13.902 [2024-10-14 17:48:12.730658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.730691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.730819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.730851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.730976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.731007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.731145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 Malloc0
00:31:13.902 [2024-10-14 17:48:12.731177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.731303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.731335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.731452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.731484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.731653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.731686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.731812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.731845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:13.902 [2024-10-14 17:48:12.732016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.732048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.732159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.732196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:31:13.902 [2024-10-14 17:48:12.732309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.732341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.732459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.732491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.732619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:13.902 [2024-10-14 17:48:12.732652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.732761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.732793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:13.902 [2024-10-14 17:48:12.732985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.733017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.733129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.733161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.733281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.733312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.733492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.733524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.733767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.733801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.733980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.734013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.734117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.734148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.734261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.734298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.734535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.734566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.734698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.734731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.734904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.734936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.735072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.735103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.735344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.735376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.735560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.735591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.902 qpair failed and we were unable to recover it.
00:31:13.902 [2024-10-14 17:48:12.735810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.902 [2024-10-14 17:48:12.735842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.903 qpair failed and we were unable to recover it.
00:31:13.903 [2024-10-14 17:48:12.735957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.903 [2024-10-14 17:48:12.735989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.903 qpair failed and we were unable to recover it.
00:31:13.903 [2024-10-14 17:48:12.736097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.903 [2024-10-14 17:48:12.736127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.903 qpair failed and we were unable to recover it.
00:31:13.903 [2024-10-14 17:48:12.736242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.903 [2024-10-14 17:48:12.736274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.903 qpair failed and we were unable to recover it.
00:31:13.903 [2024-10-14 17:48:12.736398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.903 [2024-10-14 17:48:12.736429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.903 qpair failed and we were unable to recover it.
00:31:13.903 [2024-10-14 17:48:12.736597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.903 [2024-10-14 17:48:12.736638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.903 qpair failed and we were unable to recover it.
00:31:13.903 [2024-10-14 17:48:12.736751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.903 [2024-10-14 17:48:12.736782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.903 qpair failed and we were unable to recover it.
00:31:13.903 [2024-10-14 17:48:12.736968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.903 [2024-10-14 17:48:12.737000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.903 qpair failed and we were unable to recover it.
00:31:13.903 [2024-10-14 17:48:12.737119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.903 [2024-10-14 17:48:12.737150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.903 qpair failed and we were unable to recover it.
00:31:13.903 [2024-10-14 17:48:12.737275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.903 [2024-10-14 17:48:12.737306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.903 qpair failed and we were unable to recover it.
00:31:13.903 [2024-10-14 17:48:12.737424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.903 [2024-10-14 17:48:12.737457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.903 qpair failed and we were unable to recover it.
00:31:13.903 [2024-10-14 17:48:12.737640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.903 [2024-10-14 17:48:12.737673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.903 qpair failed and we were unable to recover it.
00:31:13.903 [2024-10-14 17:48:12.737787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.903 [2024-10-14 17:48:12.737819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.903 qpair failed and we were unable to recover it.
00:31:13.903 [2024-10-14 17:48:12.738000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.903 [2024-10-14 17:48:12.738031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.903 qpair failed and we were unable to recover it.
00:31:13.903 [2024-10-14 17:48:12.738154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.903 [2024-10-14 17:48:12.738185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.903 qpair failed and we were unable to recover it.
00:31:13.903 [2024-10-14 17:48:12.738318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.903 [2024-10-14 17:48:12.738349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.903 qpair failed and we were unable to recover it.
00:31:13.903 [2024-10-14 17:48:12.738463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.903 [2024-10-14 17:48:12.738494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.903 qpair failed and we were unable to recover it.
00:31:13.903 [2024-10-14 17:48:12.738619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.903 [2024-10-14 17:48:12.738651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420
00:31:13.903 qpair failed and we were unable to recover it.
00:31:13.903 [2024-10-14 17:48:12.738768] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:13.903 [2024-10-14 17:48:12.738820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.738850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.738961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.738992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.739180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.739211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.739468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.739501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.739637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.739669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.739789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.739820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.739991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.740021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.740199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.740231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.740414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.740446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 
00:31:13.903 [2024-10-14 17:48:12.740631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.740665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.740796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.740828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.741007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.741038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.741218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.741249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.741355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.741386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.741519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.741550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a20000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.741672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.741707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.741830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.741860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.742069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.742100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.742267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.742298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 
00:31:13.903 [2024-10-14 17:48:12.742479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.742510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.742688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.742721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.742838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.742871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.903 [2024-10-14 17:48:12.743039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.903 [2024-10-14 17:48:12.743070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.903 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.743179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.743210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.743380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.743412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.743585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.743623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.743804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.743836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.743966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.743998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.744246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.744283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 
00:31:13.904 [2024-10-14 17:48:12.744415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.744447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.744652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.744685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.744856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.744887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.744986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.745017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.745123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.745154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.745264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.745295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.745408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.745438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.745548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.745578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.745770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.745802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.745908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.745939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 
00:31:13.904 [2024-10-14 17:48:12.746051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.746083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.746281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.746313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.746429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.746460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.746570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.746611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.746741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.746772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.746950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.746982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.747088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.747119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.747298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.747329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.904 [2024-10-14 17:48:12.747440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.747472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 
00:31:13.904 [2024-10-14 17:48:12.747588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.747629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.747814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:13.904 [2024-10-14 17:48:12.747847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.747971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.748002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.904 [2024-10-14 17:48:12.748192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.748224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.748411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:13.904 [2024-10-14 17:48:12.748443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.748725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.748765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.748958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.748988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.749109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.749140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 
00:31:13.904 [2024-10-14 17:48:12.749258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.749289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.749417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.749448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.749577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.749616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.749799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.749829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.750013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.750044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.904 qpair failed and we were unable to recover it. 00:31:13.904 [2024-10-14 17:48:12.750213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.904 [2024-10-14 17:48:12.750244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.750430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.750460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.750568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.750608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.750792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.750822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.751011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.751042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 
00:31:13.905 [2024-10-14 17:48:12.751155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.751185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.751364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.751395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.751514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.751544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.751721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.751753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.751883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.751914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.752030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.752060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.752165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.752195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.752329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.752359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.752518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.752549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.752671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.752703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 
00:31:13.905 [2024-10-14 17:48:12.752814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.752845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.753016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.753046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.753290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.753321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.753502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.753534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.753656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.753689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.753810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.753840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.753975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.754006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.754120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.754150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.754342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.754373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.754558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.754589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 
00:31:13.905 [2024-10-14 17:48:12.754725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.754756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.754866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.754897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.755005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.755036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.755220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.755252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.755365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.755398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.905 [2024-10-14 17:48:12.755612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.755645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:13.905 [2024-10-14 17:48:12.755884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.755920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.756033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.756063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 
00:31:13.905 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.905 [2024-10-14 17:48:12.756192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.756223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.756341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.756371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:13.905 [2024-10-14 17:48:12.756504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.756535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.756716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.756748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.756922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.756952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.757082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.905 [2024-10-14 17:48:12.757113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.905 qpair failed and we were unable to recover it. 00:31:13.905 [2024-10-14 17:48:12.757238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.757268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.757381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.757411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.757581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.757622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 
00:31:13.906 [2024-10-14 17:48:12.757733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.757764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.757877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.757908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.758032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.758064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.758251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.758281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.758415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.758445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.758683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.758715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.758919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.758949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.759067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.759099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.759269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.759299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.759431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.759462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 
00:31:13.906 [2024-10-14 17:48:12.759581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.759622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.759863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.759894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.760072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.760102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.760283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.760313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.760425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.760455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1a18000b90 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.760568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.760624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.760817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.760851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.760957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.760988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.761109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.761141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.761259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.761291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 
00:31:13.906 [2024-10-14 17:48:12.761393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.761424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.761546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.761578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.761710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.761743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.761855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.761886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.761996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.762028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.762212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.762244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.762419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.762451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.762560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.762592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.762774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.762807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.762986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.763019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 
00:31:13.906 [2024-10-14 17:48:12.763185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.763217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 [2024-10-14 17:48:12.763446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.763478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.906 qpair failed and we were unable to recover it. 00:31:13.906 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.906 [2024-10-14 17:48:12.763616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.906 [2024-10-14 17:48:12.763650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.907 qpair failed and we were unable to recover it. 00:31:13.907 [2024-10-14 17:48:12.763841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.907 [2024-10-14 17:48:12.763874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.907 qpair failed and we were unable to recover it. 00:31:13.907 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-10-14 17:48:12.763993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.907 [2024-10-14 17:48:12.764024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.907 qpair failed and we were unable to recover it. 00:31:13.907 [2024-10-14 17:48:12.764141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.907 [2024-10-14 17:48:12.764173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.907 qpair failed and we were unable to recover it. 00:31:13.907 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable [2024-10-14 17:48:12.764347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.907 [2024-10-14 17:48:12.764379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.907 qpair failed and we were unable to recover it. 00:31:13.907 [2024-10-14 17:48:12.764482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.907 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-10-14 17:48:12.764515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.907 qpair failed and we were unable to recover it. 
00:31:13.907 [2024-10-14 17:48:12.764686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.907 [2024-10-14 17:48:12.764719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.907 qpair failed and we were unable to recover it. 00:31:13.907 [2024-10-14 17:48:12.764839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.907 [2024-10-14 17:48:12.764871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.907 qpair failed and we were unable to recover it. 00:31:13.907 [2024-10-14 17:48:12.765049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.907 [2024-10-14 17:48:12.765087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.907 qpair failed and we were unable to recover it. 00:31:13.907 [2024-10-14 17:48:12.765258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.907 [2024-10-14 17:48:12.765290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.907 qpair failed and we were unable to recover it. 00:31:13.907 [2024-10-14 17:48:12.765399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.907 [2024-10-14 17:48:12.765431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.907 qpair failed and we were unable to recover it. 00:31:13.907 [2024-10-14 17:48:12.765608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.907 [2024-10-14 17:48:12.765640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.907 qpair failed and we were unable to recover it. 00:31:13.907 [2024-10-14 17:48:12.765905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.907 [2024-10-14 17:48:12.765937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.907 qpair failed and we were unable to recover it. 00:31:13.907 [2024-10-14 17:48:12.766141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.907 [2024-10-14 17:48:12.766174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.907 qpair failed and we were unable to recover it. 00:31:13.907 [2024-10-14 17:48:12.766433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.907 [2024-10-14 17:48:12.766464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.907 qpair failed and we were unable to recover it. 00:31:13.907 [2024-10-14 17:48:12.766589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.907 [2024-10-14 17:48:12.766630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.907 qpair failed and we were unable to recover it. 
00:31:13.907 [2024-10-14 17:48:12.766751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.907 [2024-10-14 17:48:12.766782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2491c60 with addr=10.0.0.2, port=4420 00:31:13.907 qpair failed and we were unable to recover it. 00:31:13.907 [2024-10-14 17:48:12.767009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:13.907 [2024-10-14 17:48:12.769453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.907 [2024-10-14 17:48:12.769612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.907 [2024-10-14 17:48:12.769659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.907 [2024-10-14 17:48:12.769683] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.907 [2024-10-14 17:48:12.769704] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:13.907 [2024-10-14 17:48:12.769751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.907 qpair failed and we were unable to recover it. 00:31:13.907 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.907 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:13.907 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.907 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:13.907 [2024-10-14 17:48:12.779347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.907 [2024-10-14 17:48:12.779447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.907 [2024-10-14 17:48:12.779491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.907 [2024-10-14 17:48:12.779514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.907 [2024-10-14 17:48:12.779534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:13.907 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.907 [2024-10-14 17:48:12.779578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.907 qpair failed and we were unable to recover it. 
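Interleaved with the connection retries above, the shell trace shows the target-side setup sequence that finally brings the listener up: create the subsystem, attach the Malloc0 namespace, add the TCP listener on 10.0.0.2:4420 (which produces the "*** NVMe/TCP Target Listening ***" notice), then add the discovery listener. Untangled from the log, the same four steps as a sketch against SPDK's scripts/rpc.py, assuming an already-running nvmf target application and assuming the harness's rpc_cmd maps onto this script; the subcommands and flags are exactly what the trace shows:
# Sketch of the setup steps visible in the trace above; the path and the
# running-target assumption are ours, the commands come from the trace.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420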
00:31:13.907 17:48:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1267412
00:31:13.907 [2024-10-14 17:48:12.789319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.907 [2024-10-14 17:48:12.789396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.907 [2024-10-14 17:48:12.789422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.907 [2024-10-14 17:48:12.789436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.907 [2024-10-14 17:48:12.789449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.907 [2024-10-14 17:48:12.789476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.907 qpair failed and we were unable to recover it.
00:31:13.907 [2024-10-14 17:48:12.799363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.907 [2024-10-14 17:48:12.799429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.907 [2024-10-14 17:48:12.799449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.907 [2024-10-14 17:48:12.799458] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.907 [2024-10-14 17:48:12.799466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.907 [2024-10-14 17:48:12.799485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.907 qpair failed and we were unable to recover it.
00:31:13.907 [2024-10-14 17:48:12.809317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.907 [2024-10-14 17:48:12.809374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.907 [2024-10-14 17:48:12.809389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.907 [2024-10-14 17:48:12.809396] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.907 [2024-10-14 17:48:12.809401] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.907 [2024-10-14 17:48:12.809416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.907 qpair failed and we were unable to recover it.
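
The recurring "sct 1, sc 130" pair is worth decoding: SCT 1 is the Command Specific status type, and for a Fabrics CONNECT command SC 0x82 (decimal 130) is defined as "Connect Invalid Parameters" in the NVMe-oF specification. That is consistent with the target-side "Unknown controller ID 0x1": after the target is restarted, the host's I/O-queue CONNECT still names a CNTLID the new target instance does not recognize. A small sketch of that decoding (the string table is transcribed from the spec, not from SPDK):

#include <stdint.h>
#include <stdio.h>

/* Command-specific status codes for the Fabrics Connect command,
 * per the NVMe-oF spec (SCT 1). */
static const char *connect_sc_str(uint8_t sc)
{
	switch (sc) {
	case 0x80: return "Connect Incompatible Format";
	case 0x81: return "Connect Controller Busy";
	case 0x82: return "Connect Invalid Parameters";
	case 0x83: return "Connect Restart Discovery";
	case 0x84: return "Connect Invalid Host";
	default:   return "unknown command-specific status";
	}
}

int main(void)
{
	uint8_t sct = 1, sc = 130;	/* values from the log above */

	if (sct == 1) {	/* SCT 1 == Command Specific status type */
		printf("sct %u, sc %u (0x%02x): %s\n", sct, sc, sc, connect_sc_str(sc));
	}
	return 0;
}
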
00:31:13.907 [2024-10-14 17:48:12.819356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.907 [2024-10-14 17:48:12.819414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.907 [2024-10-14 17:48:12.819430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.907 [2024-10-14 17:48:12.819437] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.907 [2024-10-14 17:48:12.819443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.907 [2024-10-14 17:48:12.819457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.907 qpair failed and we were unable to recover it.
00:31:13.907 [2024-10-14 17:48:12.829347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.907 [2024-10-14 17:48:12.829401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.907 [2024-10-14 17:48:12.829416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.907 [2024-10-14 17:48:12.829422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.907 [2024-10-14 17:48:12.829428] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.907 [2024-10-14 17:48:12.829442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.907 qpair failed and we were unable to recover it.
00:31:13.907 [2024-10-14 17:48:12.839330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.908 [2024-10-14 17:48:12.839386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.908 [2024-10-14 17:48:12.839400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.908 [2024-10-14 17:48:12.839406] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.908 [2024-10-14 17:48:12.839412] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.908 [2024-10-14 17:48:12.839426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.908 qpair failed and we were unable to recover it.
00:31:13.908 [2024-10-14 17:48:12.849373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.908 [2024-10-14 17:48:12.849432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.908 [2024-10-14 17:48:12.849449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.908 [2024-10-14 17:48:12.849456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.908 [2024-10-14 17:48:12.849462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.908 [2024-10-14 17:48:12.849477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.908 qpair failed and we were unable to recover it.
00:31:13.908 [2024-10-14 17:48:12.859440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.908 [2024-10-14 17:48:12.859490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.908 [2024-10-14 17:48:12.859505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.908 [2024-10-14 17:48:12.859514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.908 [2024-10-14 17:48:12.859520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.908 [2024-10-14 17:48:12.859534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.908 qpair failed and we were unable to recover it.
00:31:13.908 [2024-10-14 17:48:12.869487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.908 [2024-10-14 17:48:12.869547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.908 [2024-10-14 17:48:12.869562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.908 [2024-10-14 17:48:12.869569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.908 [2024-10-14 17:48:12.869575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.908 [2024-10-14 17:48:12.869589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.908 qpair failed and we were unable to recover it.
00:31:13.908 [2024-10-14 17:48:12.879507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.908 [2024-10-14 17:48:12.879566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.908 [2024-10-14 17:48:12.879581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.908 [2024-10-14 17:48:12.879587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.908 [2024-10-14 17:48:12.879593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.908 [2024-10-14 17:48:12.879612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.908 qpair failed and we were unable to recover it.
00:31:13.908 [2024-10-14 17:48:12.889549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.908 [2024-10-14 17:48:12.889608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.908 [2024-10-14 17:48:12.889623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.908 [2024-10-14 17:48:12.889629] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.908 [2024-10-14 17:48:12.889635] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.908 [2024-10-14 17:48:12.889650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.908 qpair failed and we were unable to recover it.
00:31:13.908 [2024-10-14 17:48:12.899558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.908 [2024-10-14 17:48:12.899643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.908 [2024-10-14 17:48:12.899658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.908 [2024-10-14 17:48:12.899665] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.908 [2024-10-14 17:48:12.899671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.908 [2024-10-14 17:48:12.899686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.908 qpair failed and we were unable to recover it.
00:31:13.908 [2024-10-14 17:48:12.909597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.908 [2024-10-14 17:48:12.909659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.908 [2024-10-14 17:48:12.909673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.908 [2024-10-14 17:48:12.909679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.908 [2024-10-14 17:48:12.909685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.908 [2024-10-14 17:48:12.909700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.908 qpair failed and we were unable to recover it.
00:31:13.908 [2024-10-14 17:48:12.919553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.908 [2024-10-14 17:48:12.919612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.908 [2024-10-14 17:48:12.919628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.908 [2024-10-14 17:48:12.919635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.908 [2024-10-14 17:48:12.919641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.908 [2024-10-14 17:48:12.919655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.908 qpair failed and we were unable to recover it.
00:31:13.908 [2024-10-14 17:48:12.929654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.908 [2024-10-14 17:48:12.929707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.908 [2024-10-14 17:48:12.929722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.908 [2024-10-14 17:48:12.929728] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.908 [2024-10-14 17:48:12.929734] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.908 [2024-10-14 17:48:12.929748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.908 qpair failed and we were unable to recover it.
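
The "CQ transport error -6 (No such device or address)" lines are the host-side polling path reporting -ENXIO: spdk_nvme_qpair_process_completions() returns a negative errno once the qpair's transport connection is gone. A hedged sketch of how an application poll loop would observe this (a fragment, not the test's code; it assumes an already created `qpair` and simplifies error handling):

#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Poll one qpair; returns 0 on success, negative errno on transport death. */
static int poll_qpair(struct spdk_nvme_qpair *qpair)
{
	/* max_completions == 0 means "process everything available" */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc < 0) {
		/* rc == -6 is -ENXIO, as in the log: the connection is dead,
		 * so stop polling and let the application reconnect or fail
		 * the outstanding I/O over to another path. */
		fprintf(stderr, "CQ transport error %d on qpair\n", rc);
		return rc;
	}
	return 0;	/* rc >= 0: number of completions processed */
}
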
00:31:13.908 [2024-10-14 17:48:12.939677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.908 [2024-10-14 17:48:12.939732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.908 [2024-10-14 17:48:12.939746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.908 [2024-10-14 17:48:12.939752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.908 [2024-10-14 17:48:12.939758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.908 [2024-10-14 17:48:12.939771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.908 qpair failed and we were unable to recover it.
00:31:13.908 [2024-10-14 17:48:12.949653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.908 [2024-10-14 17:48:12.949749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.908 [2024-10-14 17:48:12.949763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.908 [2024-10-14 17:48:12.949772] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.908 [2024-10-14 17:48:12.949778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.908 [2024-10-14 17:48:12.949792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.908 qpair failed and we were unable to recover it.
00:31:13.908 [2024-10-14 17:48:12.959780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.908 [2024-10-14 17:48:12.959835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.908 [2024-10-14 17:48:12.959849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.908 [2024-10-14 17:48:12.959855] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.908 [2024-10-14 17:48:12.959861] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.908 [2024-10-14 17:48:12.959875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.908 qpair failed and we were unable to recover it.
00:31:13.908 [2024-10-14 17:48:12.969779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.908 [2024-10-14 17:48:12.969833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.908 [2024-10-14 17:48:12.969847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.908 [2024-10-14 17:48:12.969854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.908 [2024-10-14 17:48:12.969860] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.908 [2024-10-14 17:48:12.969874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.908 qpair failed and we were unable to recover it.
00:31:13.908 [2024-10-14 17:48:12.979824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.908 [2024-10-14 17:48:12.979882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.908 [2024-10-14 17:48:12.979896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.908 [2024-10-14 17:48:12.979903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.909 [2024-10-14 17:48:12.979908] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.909 [2024-10-14 17:48:12.979922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.909 qpair failed and we were unable to recover it.
00:31:13.909 [2024-10-14 17:48:12.989836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.909 [2024-10-14 17:48:12.989887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.909 [2024-10-14 17:48:12.989901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.909 [2024-10-14 17:48:12.989907] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.909 [2024-10-14 17:48:12.989913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.909 [2024-10-14 17:48:12.989927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.909 qpair failed and we were unable to recover it.
00:31:13.909 [2024-10-14 17:48:12.999887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.909 [2024-10-14 17:48:12.999948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.909 [2024-10-14 17:48:12.999963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.909 [2024-10-14 17:48:12.999970] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.909 [2024-10-14 17:48:12.999976] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:13.909 [2024-10-14 17:48:12.999989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.909 qpair failed and we were unable to recover it. 00:31:13.909 [2024-10-14 17:48:13.009941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.909 [2024-10-14 17:48:13.010003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.909 [2024-10-14 17:48:13.010017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.909 [2024-10-14 17:48:13.010023] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.909 [2024-10-14 17:48:13.010029] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:13.909 [2024-10-14 17:48:13.010043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.909 qpair failed and we were unable to recover it. 00:31:13.909 [2024-10-14 17:48:13.019858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.909 [2024-10-14 17:48:13.019911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.909 [2024-10-14 17:48:13.019925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.909 [2024-10-14 17:48:13.019931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.909 [2024-10-14 17:48:13.019937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:13.909 [2024-10-14 17:48:13.019951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.909 qpair failed and we were unable to recover it. 
00:31:13.909 [2024-10-14 17:48:13.029936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.909 [2024-10-14 17:48:13.029984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.909 [2024-10-14 17:48:13.030001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.909 [2024-10-14 17:48:13.030008] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.909 [2024-10-14 17:48:13.030014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:13.909 [2024-10-14 17:48:13.030029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.909 qpair failed and we were unable to recover it.
00:31:14.169 [2024-10-14 17:48:13.040027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.169 [2024-10-14 17:48:13.040127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.169 [2024-10-14 17:48:13.040144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.169 [2024-10-14 17:48:13.040154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.169 [2024-10-14 17:48:13.040160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.169 [2024-10-14 17:48:13.040175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.169 qpair failed and we were unable to recover it.
00:31:14.169 [2024-10-14 17:48:13.050005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.169 [2024-10-14 17:48:13.050059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.169 [2024-10-14 17:48:13.050074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.169 [2024-10-14 17:48:13.050080] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.169 [2024-10-14 17:48:13.050086] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.169 [2024-10-14 17:48:13.050101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.169 qpair failed and we were unable to recover it.
00:31:14.169 [2024-10-14 17:48:13.060032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.169 [2024-10-14 17:48:13.060092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.169 [2024-10-14 17:48:13.060109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.169 [2024-10-14 17:48:13.060116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.169 [2024-10-14 17:48:13.060122] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.169 [2024-10-14 17:48:13.060138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.169 qpair failed and we were unable to recover it.
00:31:14.169 [2024-10-14 17:48:13.069985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.169 [2024-10-14 17:48:13.070037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.169 [2024-10-14 17:48:13.070052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.169 [2024-10-14 17:48:13.070059] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.169 [2024-10-14 17:48:13.070065] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.170 [2024-10-14 17:48:13.070079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.170 qpair failed and we were unable to recover it.
00:31:14.170 [2024-10-14 17:48:13.080104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.170 [2024-10-14 17:48:13.080162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.170 [2024-10-14 17:48:13.080178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.170 [2024-10-14 17:48:13.080185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.170 [2024-10-14 17:48:13.080192] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.170 [2024-10-14 17:48:13.080206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.170 qpair failed and we were unable to recover it.
00:31:14.170 [2024-10-14 17:48:13.090142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.170 [2024-10-14 17:48:13.090197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.170 [2024-10-14 17:48:13.090211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.170 [2024-10-14 17:48:13.090218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.170 [2024-10-14 17:48:13.090224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.170 [2024-10-14 17:48:13.090238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.170 qpair failed and we were unable to recover it.
00:31:14.170 [2024-10-14 17:48:13.100076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.170 [2024-10-14 17:48:13.100127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.170 [2024-10-14 17:48:13.100141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.170 [2024-10-14 17:48:13.100148] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.170 [2024-10-14 17:48:13.100154] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.170 [2024-10-14 17:48:13.100168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.170 qpair failed and we were unable to recover it.
00:31:14.170 [2024-10-14 17:48:13.110175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.170 [2024-10-14 17:48:13.110231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.170 [2024-10-14 17:48:13.110244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.170 [2024-10-14 17:48:13.110251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.170 [2024-10-14 17:48:13.110257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.170 [2024-10-14 17:48:13.110271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.170 qpair failed and we were unable to recover it.
00:31:14.170 [2024-10-14 17:48:13.120208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.170 [2024-10-14 17:48:13.120262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.170 [2024-10-14 17:48:13.120276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.170 [2024-10-14 17:48:13.120283] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.170 [2024-10-14 17:48:13.120289] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.170 [2024-10-14 17:48:13.120303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.170 qpair failed and we were unable to recover it.
00:31:14.170 [2024-10-14 17:48:13.130251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.170 [2024-10-14 17:48:13.130304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.170 [2024-10-14 17:48:13.130318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.170 [2024-10-14 17:48:13.130328] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.170 [2024-10-14 17:48:13.130334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.170 [2024-10-14 17:48:13.130348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.170 qpair failed and we were unable to recover it.
00:31:14.170 [2024-10-14 17:48:13.140255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.170 [2024-10-14 17:48:13.140306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.170 [2024-10-14 17:48:13.140320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.170 [2024-10-14 17:48:13.140326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.170 [2024-10-14 17:48:13.140332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.170 [2024-10-14 17:48:13.140346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.170 qpair failed and we were unable to recover it.
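
Note the cadence: one failed CONNECT attempt lands roughly every 10 ms, i.e. the harness retries on a fixed interval while the target is down. A generic capped-backoff sketch for comparison (the `try_connect()` helper is hypothetical, not part of the test; this is one common way to avoid hammering a target that is known to be down):

#include <stdbool.h>
#include <unistd.h>

extern bool try_connect(void);	/* hypothetical: performs one CONNECT attempt */

static bool connect_with_backoff(int max_tries)
{
	useconds_t delay_us = 10 * 1000;	/* start at 10 ms, as in the log */

	for (int i = 0; i < max_tries; i++) {
		if (try_connect()) {
			return true;
		}
		usleep(delay_us);
		if (delay_us < 1000 * 1000) {	/* cap the backoff at 1 s */
			delay_us *= 2;
		}
	}
	return false;
}
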
00:31:14.170 [2024-10-14 17:48:13.150302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.170 [2024-10-14 17:48:13.150352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.170 [2024-10-14 17:48:13.150365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.170 [2024-10-14 17:48:13.150372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.170 [2024-10-14 17:48:13.150378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.170 [2024-10-14 17:48:13.150392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.170 qpair failed and we were unable to recover it.
00:31:14.170 [2024-10-14 17:48:13.160322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.170 [2024-10-14 17:48:13.160376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.170 [2024-10-14 17:48:13.160390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.170 [2024-10-14 17:48:13.160397] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.170 [2024-10-14 17:48:13.160403] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.170 [2024-10-14 17:48:13.160416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.170 qpair failed and we were unable to recover it.
00:31:14.170 [2024-10-14 17:48:13.170378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.170 [2024-10-14 17:48:13.170433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.170 [2024-10-14 17:48:13.170447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.170 [2024-10-14 17:48:13.170454] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.170 [2024-10-14 17:48:13.170460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.170 [2024-10-14 17:48:13.170473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.170 qpair failed and we were unable to recover it.
00:31:14.170 [2024-10-14 17:48:13.180411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.170 [2024-10-14 17:48:13.180468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.170 [2024-10-14 17:48:13.180481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.170 [2024-10-14 17:48:13.180488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.170 [2024-10-14 17:48:13.180494] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.170 [2024-10-14 17:48:13.180507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.170 qpair failed and we were unable to recover it.
00:31:14.170 [2024-10-14 17:48:13.190405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.170 [2024-10-14 17:48:13.190459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.170 [2024-10-14 17:48:13.190472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.170 [2024-10-14 17:48:13.190479] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.170 [2024-10-14 17:48:13.190485] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.170 [2024-10-14 17:48:13.190499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.170 qpair failed and we were unable to recover it.
00:31:14.170 [2024-10-14 17:48:13.200408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.170 [2024-10-14 17:48:13.200479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.170 [2024-10-14 17:48:13.200492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.170 [2024-10-14 17:48:13.200498] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.170 [2024-10-14 17:48:13.200504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.170 [2024-10-14 17:48:13.200518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.170 qpair failed and we were unable to recover it.
00:31:14.170 [2024-10-14 17:48:13.210483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.170 [2024-10-14 17:48:13.210573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.170 [2024-10-14 17:48:13.210586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.170 [2024-10-14 17:48:13.210593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.170 [2024-10-14 17:48:13.210598] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.170 [2024-10-14 17:48:13.210623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.170 qpair failed and we were unable to recover it.
00:31:14.170 [2024-10-14 17:48:13.220480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.171 [2024-10-14 17:48:13.220532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.171 [2024-10-14 17:48:13.220549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.171 [2024-10-14 17:48:13.220556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.171 [2024-10-14 17:48:13.220562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.171 [2024-10-14 17:48:13.220575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.171 qpair failed and we were unable to recover it.
00:31:14.171 [2024-10-14 17:48:13.230513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.171 [2024-10-14 17:48:13.230566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.171 [2024-10-14 17:48:13.230579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.171 [2024-10-14 17:48:13.230587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.171 [2024-10-14 17:48:13.230593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.171 [2024-10-14 17:48:13.230610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.171 qpair failed and we were unable to recover it.
00:31:14.171 [2024-10-14 17:48:13.240542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.171 [2024-10-14 17:48:13.240594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.171 [2024-10-14 17:48:13.240612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.171 [2024-10-14 17:48:13.240618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.171 [2024-10-14 17:48:13.240624] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.171 [2024-10-14 17:48:13.240638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.171 qpair failed and we were unable to recover it.
00:31:14.171 [2024-10-14 17:48:13.250578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.171 [2024-10-14 17:48:13.250637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.171 [2024-10-14 17:48:13.250651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.171 [2024-10-14 17:48:13.250657] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.171 [2024-10-14 17:48:13.250663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.171 [2024-10-14 17:48:13.250677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.171 qpair failed and we were unable to recover it.
00:31:14.171 [2024-10-14 17:48:13.260597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.171 [2024-10-14 17:48:13.260657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.171 [2024-10-14 17:48:13.260671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.171 [2024-10-14 17:48:13.260677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.171 [2024-10-14 17:48:13.260683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.171 [2024-10-14 17:48:13.260697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.171 qpair failed and we were unable to recover it.
00:31:14.171 [2024-10-14 17:48:13.270619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.171 [2024-10-14 17:48:13.270673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.171 [2024-10-14 17:48:13.270687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.171 [2024-10-14 17:48:13.270693] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.171 [2024-10-14 17:48:13.270699] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.171 [2024-10-14 17:48:13.270713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.171 qpair failed and we were unable to recover it.
00:31:14.171 [2024-10-14 17:48:13.280661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.171 [2024-10-14 17:48:13.280719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.171 [2024-10-14 17:48:13.280732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.171 [2024-10-14 17:48:13.280739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.171 [2024-10-14 17:48:13.280745] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.171 [2024-10-14 17:48:13.280759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.171 qpair failed and we were unable to recover it.
00:31:14.171 [2024-10-14 17:48:13.290670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.171 [2024-10-14 17:48:13.290725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.171 [2024-10-14 17:48:13.290739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.171 [2024-10-14 17:48:13.290746] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.171 [2024-10-14 17:48:13.290751] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.171 [2024-10-14 17:48:13.290765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.171 qpair failed and we were unable to recover it.
00:31:14.171 [2024-10-14 17:48:13.300689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.171 [2024-10-14 17:48:13.300763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.171 [2024-10-14 17:48:13.300776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.171 [2024-10-14 17:48:13.300783] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.171 [2024-10-14 17:48:13.300789] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.171 [2024-10-14 17:48:13.300803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.171 qpair failed and we were unable to recover it.
00:31:14.431 [2024-10-14 17:48:13.310739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.431 [2024-10-14 17:48:13.310790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.431 [2024-10-14 17:48:13.310810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.431 [2024-10-14 17:48:13.310817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.431 [2024-10-14 17:48:13.310823] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.431 [2024-10-14 17:48:13.310838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.431 qpair failed and we were unable to recover it.
00:31:14.431 [2024-10-14 17:48:13.320803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.431 [2024-10-14 17:48:13.320857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.431 [2024-10-14 17:48:13.320874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.431 [2024-10-14 17:48:13.320881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.431 [2024-10-14 17:48:13.320887] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.431 [2024-10-14 17:48:13.320902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.431 qpair failed and we were unable to recover it.
00:31:14.431 [2024-10-14 17:48:13.330792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.431 [2024-10-14 17:48:13.330872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.431 [2024-10-14 17:48:13.330887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.431 [2024-10-14 17:48:13.330893] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.431 [2024-10-14 17:48:13.330899] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.431 [2024-10-14 17:48:13.330913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.431 qpair failed and we were unable to recover it.
00:31:14.431 [2024-10-14 17:48:13.340833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.431 [2024-10-14 17:48:13.340891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.431 [2024-10-14 17:48:13.340905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.431 [2024-10-14 17:48:13.340912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.431 [2024-10-14 17:48:13.340918] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.431 [2024-10-14 17:48:13.340932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.431 qpair failed and we were unable to recover it.
00:31:14.431 [2024-10-14 17:48:13.350844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.431 [2024-10-14 17:48:13.350899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.431 [2024-10-14 17:48:13.350912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.431 [2024-10-14 17:48:13.350919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.431 [2024-10-14 17:48:13.350925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.431 [2024-10-14 17:48:13.350944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.431 qpair failed and we were unable to recover it.
00:31:14.431 [2024-10-14 17:48:13.360898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.431 [2024-10-14 17:48:13.360970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.431 [2024-10-14 17:48:13.360984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.431 [2024-10-14 17:48:13.360991] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.431 [2024-10-14 17:48:13.360996] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.431 [2024-10-14 17:48:13.361010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.431 qpair failed and we were unable to recover it. 00:31:14.432 [2024-10-14 17:48:13.370908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.432 [2024-10-14 17:48:13.370964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.432 [2024-10-14 17:48:13.370977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.432 [2024-10-14 17:48:13.370983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.432 [2024-10-14 17:48:13.370989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.432 [2024-10-14 17:48:13.371003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.432 qpair failed and we were unable to recover it. 00:31:14.432 [2024-10-14 17:48:13.380875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.432 [2024-10-14 17:48:13.380928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.432 [2024-10-14 17:48:13.380942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.432 [2024-10-14 17:48:13.380949] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.432 [2024-10-14 17:48:13.380955] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.432 [2024-10-14 17:48:13.380968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.432 qpair failed and we were unable to recover it. 
00:31:14.432 [2024-10-14 17:48:13.390959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.432 [2024-10-14 17:48:13.391015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.432 [2024-10-14 17:48:13.391028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.432 [2024-10-14 17:48:13.391035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.432 [2024-10-14 17:48:13.391041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.432 [2024-10-14 17:48:13.391055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.432 qpair failed and we were unable to recover it. 00:31:14.432 [2024-10-14 17:48:13.400995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.432 [2024-10-14 17:48:13.401059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.432 [2024-10-14 17:48:13.401075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.432 [2024-10-14 17:48:13.401082] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.432 [2024-10-14 17:48:13.401088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.432 [2024-10-14 17:48:13.401101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.432 qpair failed and we were unable to recover it. 00:31:14.432 [2024-10-14 17:48:13.411017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.432 [2024-10-14 17:48:13.411070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.432 [2024-10-14 17:48:13.411084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.432 [2024-10-14 17:48:13.411090] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.432 [2024-10-14 17:48:13.411096] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.432 [2024-10-14 17:48:13.411110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.432 qpair failed and we were unable to recover it. 
00:31:14.432 [2024-10-14 17:48:13.421050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.432 [2024-10-14 17:48:13.421101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.432 [2024-10-14 17:48:13.421115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.432 [2024-10-14 17:48:13.421121] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.432 [2024-10-14 17:48:13.421127] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.432 [2024-10-14 17:48:13.421141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.432 qpair failed and we were unable to recover it. 00:31:14.432 [2024-10-14 17:48:13.431104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.432 [2024-10-14 17:48:13.431157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.432 [2024-10-14 17:48:13.431171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.432 [2024-10-14 17:48:13.431177] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.432 [2024-10-14 17:48:13.431183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.432 [2024-10-14 17:48:13.431197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.432 qpair failed and we were unable to recover it. 00:31:14.432 [2024-10-14 17:48:13.441108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.432 [2024-10-14 17:48:13.441162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.432 [2024-10-14 17:48:13.441175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.432 [2024-10-14 17:48:13.441181] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.432 [2024-10-14 17:48:13.441188] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.432 [2024-10-14 17:48:13.441204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.432 qpair failed and we were unable to recover it. 
00:31:14.432 [2024-10-14 17:48:13.451146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.432 [2024-10-14 17:48:13.451203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.432 [2024-10-14 17:48:13.451217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.432 [2024-10-14 17:48:13.451224] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.432 [2024-10-14 17:48:13.451229] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.432 [2024-10-14 17:48:13.451243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.432 qpair failed and we were unable to recover it. 00:31:14.432 [2024-10-14 17:48:13.461216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.432 [2024-10-14 17:48:13.461272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.432 [2024-10-14 17:48:13.461285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.432 [2024-10-14 17:48:13.461292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.432 [2024-10-14 17:48:13.461298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.432 [2024-10-14 17:48:13.461311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.432 qpair failed and we were unable to recover it. 00:31:14.432 [2024-10-14 17:48:13.471181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.432 [2024-10-14 17:48:13.471232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.432 [2024-10-14 17:48:13.471246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.432 [2024-10-14 17:48:13.471253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.432 [2024-10-14 17:48:13.471258] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.432 [2024-10-14 17:48:13.471272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.432 qpair failed and we were unable to recover it. 
00:31:14.432 [2024-10-14 17:48:13.481207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.432 [2024-10-14 17:48:13.481265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.432 [2024-10-14 17:48:13.481279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.432 [2024-10-14 17:48:13.481286] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.432 [2024-10-14 17:48:13.481292] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.432 [2024-10-14 17:48:13.481305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.432 qpair failed and we were unable to recover it. 00:31:14.432 [2024-10-14 17:48:13.491241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.432 [2024-10-14 17:48:13.491291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.432 [2024-10-14 17:48:13.491307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.432 [2024-10-14 17:48:13.491314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.432 [2024-10-14 17:48:13.491320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.432 [2024-10-14 17:48:13.491334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.432 qpair failed and we were unable to recover it. 00:31:14.433 [2024-10-14 17:48:13.501265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.433 [2024-10-14 17:48:13.501321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.433 [2024-10-14 17:48:13.501335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.433 [2024-10-14 17:48:13.501341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.433 [2024-10-14 17:48:13.501347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.433 [2024-10-14 17:48:13.501361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.433 qpair failed and we were unable to recover it. 
00:31:14.433 [2024-10-14 17:48:13.511294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.433 [2024-10-14 17:48:13.511381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.433 [2024-10-14 17:48:13.511395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.433 [2024-10-14 17:48:13.511402] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.433 [2024-10-14 17:48:13.511407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.433 [2024-10-14 17:48:13.511421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.433 qpair failed and we were unable to recover it. 00:31:14.433 [2024-10-14 17:48:13.521327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.433 [2024-10-14 17:48:13.521378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.433 [2024-10-14 17:48:13.521393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.433 [2024-10-14 17:48:13.521399] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.433 [2024-10-14 17:48:13.521405] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.433 [2024-10-14 17:48:13.521420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.433 qpair failed and we were unable to recover it. 00:31:14.433 [2024-10-14 17:48:13.531351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.433 [2024-10-14 17:48:13.531406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.433 [2024-10-14 17:48:13.531420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.433 [2024-10-14 17:48:13.531426] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.433 [2024-10-14 17:48:13.531432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.433 [2024-10-14 17:48:13.531449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.433 qpair failed and we were unable to recover it. 
00:31:14.433 [2024-10-14 17:48:13.541358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.433 [2024-10-14 17:48:13.541424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.433 [2024-10-14 17:48:13.541439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.433 [2024-10-14 17:48:13.541445] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.433 [2024-10-14 17:48:13.541451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.433 [2024-10-14 17:48:13.541465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.433 qpair failed and we were unable to recover it. 00:31:14.433 [2024-10-14 17:48:13.551412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.433 [2024-10-14 17:48:13.551482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.433 [2024-10-14 17:48:13.551496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.433 [2024-10-14 17:48:13.551502] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.433 [2024-10-14 17:48:13.551508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.433 [2024-10-14 17:48:13.551522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.433 qpair failed and we were unable to recover it. 00:31:14.433 [2024-10-14 17:48:13.561449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.433 [2024-10-14 17:48:13.561507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.433 [2024-10-14 17:48:13.561522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.433 [2024-10-14 17:48:13.561528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.433 [2024-10-14 17:48:13.561534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.433 [2024-10-14 17:48:13.561548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.433 qpair failed and we were unable to recover it. 
00:31:14.693 [2024-10-14 17:48:13.571607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.693 [2024-10-14 17:48:13.571670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.693 [2024-10-14 17:48:13.571687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.693 [2024-10-14 17:48:13.571694] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.693 [2024-10-14 17:48:13.571700] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.693 [2024-10-14 17:48:13.571715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.693 qpair failed and we were unable to recover it. 00:31:14.693 [2024-10-14 17:48:13.581545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.693 [2024-10-14 17:48:13.581604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.693 [2024-10-14 17:48:13.581623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.694 [2024-10-14 17:48:13.581630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.694 [2024-10-14 17:48:13.581636] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.694 [2024-10-14 17:48:13.581651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.694 qpair failed and we were unable to recover it. 00:31:14.694 [2024-10-14 17:48:13.591570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.694 [2024-10-14 17:48:13.591630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.694 [2024-10-14 17:48:13.591645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.694 [2024-10-14 17:48:13.591651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.694 [2024-10-14 17:48:13.591657] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.694 [2024-10-14 17:48:13.591672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.694 qpair failed and we were unable to recover it. 
00:31:14.694 [2024-10-14 17:48:13.601598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.694 [2024-10-14 17:48:13.601671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.694 [2024-10-14 17:48:13.601685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.694 [2024-10-14 17:48:13.601692] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.694 [2024-10-14 17:48:13.601697] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.694 [2024-10-14 17:48:13.601712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.694 qpair failed and we were unable to recover it. 00:31:14.694 [2024-10-14 17:48:13.611595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.694 [2024-10-14 17:48:13.611655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.694 [2024-10-14 17:48:13.611670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.694 [2024-10-14 17:48:13.611677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.694 [2024-10-14 17:48:13.611682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.694 [2024-10-14 17:48:13.611697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.694 qpair failed and we were unable to recover it. 00:31:14.694 [2024-10-14 17:48:13.621645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.694 [2024-10-14 17:48:13.621706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.694 [2024-10-14 17:48:13.621720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.694 [2024-10-14 17:48:13.621727] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.694 [2024-10-14 17:48:13.621736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.694 [2024-10-14 17:48:13.621751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.694 qpair failed and we were unable to recover it. 
00:31:14.694 [2024-10-14 17:48:13.631652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.694 [2024-10-14 17:48:13.631706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.694 [2024-10-14 17:48:13.631720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.694 [2024-10-14 17:48:13.631727] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.694 [2024-10-14 17:48:13.631733] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.694 [2024-10-14 17:48:13.631747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.694 qpair failed and we were unable to recover it. 00:31:14.694 [2024-10-14 17:48:13.641719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.694 [2024-10-14 17:48:13.641775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.694 [2024-10-14 17:48:13.641789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.694 [2024-10-14 17:48:13.641796] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.694 [2024-10-14 17:48:13.641801] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.694 [2024-10-14 17:48:13.641815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.694 qpair failed and we were unable to recover it. 00:31:14.694 [2024-10-14 17:48:13.651758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.694 [2024-10-14 17:48:13.651815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.694 [2024-10-14 17:48:13.651829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.694 [2024-10-14 17:48:13.651835] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.694 [2024-10-14 17:48:13.651841] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.694 [2024-10-14 17:48:13.651855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.694 qpair failed and we were unable to recover it. 
00:31:14.694 [2024-10-14 17:48:13.661751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.694 [2024-10-14 17:48:13.661805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.694 [2024-10-14 17:48:13.661818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.694 [2024-10-14 17:48:13.661825] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.694 [2024-10-14 17:48:13.661831] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.694 [2024-10-14 17:48:13.661844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.694 qpair failed and we were unable to recover it. 00:31:14.694 [2024-10-14 17:48:13.671780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.694 [2024-10-14 17:48:13.671830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.694 [2024-10-14 17:48:13.671847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.694 [2024-10-14 17:48:13.671853] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.694 [2024-10-14 17:48:13.671859] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.694 [2024-10-14 17:48:13.671873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.694 qpair failed and we were unable to recover it. 00:31:14.694 [2024-10-14 17:48:13.681809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.694 [2024-10-14 17:48:13.681864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.694 [2024-10-14 17:48:13.681878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.694 [2024-10-14 17:48:13.681884] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.694 [2024-10-14 17:48:13.681890] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.694 [2024-10-14 17:48:13.681903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.694 qpair failed and we were unable to recover it. 
00:31:14.694 [2024-10-14 17:48:13.691856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.694 [2024-10-14 17:48:13.691916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.694 [2024-10-14 17:48:13.691930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.694 [2024-10-14 17:48:13.691936] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.694 [2024-10-14 17:48:13.691942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.694 [2024-10-14 17:48:13.691956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.694 qpair failed and we were unable to recover it. 00:31:14.694 [2024-10-14 17:48:13.701894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.694 [2024-10-14 17:48:13.701958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.694 [2024-10-14 17:48:13.701972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.694 [2024-10-14 17:48:13.701978] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.694 [2024-10-14 17:48:13.701984] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.694 [2024-10-14 17:48:13.701999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.694 qpair failed and we were unable to recover it. 00:31:14.694 [2024-10-14 17:48:13.711943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.695 [2024-10-14 17:48:13.712006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.695 [2024-10-14 17:48:13.712020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.695 [2024-10-14 17:48:13.712027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.695 [2024-10-14 17:48:13.712035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.695 [2024-10-14 17:48:13.712049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.695 qpair failed and we were unable to recover it. 
00:31:14.695 [2024-10-14 17:48:13.721936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.695 [2024-10-14 17:48:13.721989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.695 [2024-10-14 17:48:13.722003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.695 [2024-10-14 17:48:13.722009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.695 [2024-10-14 17:48:13.722015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.695 [2024-10-14 17:48:13.722029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.695 qpair failed and we were unable to recover it. 00:31:14.695 [2024-10-14 17:48:13.731934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.695 [2024-10-14 17:48:13.732004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.695 [2024-10-14 17:48:13.732017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.695 [2024-10-14 17:48:13.732024] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.695 [2024-10-14 17:48:13.732030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.695 [2024-10-14 17:48:13.732044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.695 qpair failed and we were unable to recover it. 00:31:14.695 [2024-10-14 17:48:13.741977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.695 [2024-10-14 17:48:13.742030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.695 [2024-10-14 17:48:13.742044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.695 [2024-10-14 17:48:13.742050] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.695 [2024-10-14 17:48:13.742056] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.695 [2024-10-14 17:48:13.742070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.695 qpair failed and we were unable to recover it. 
00:31:14.695 [2024-10-14 17:48:13.752007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.695 [2024-10-14 17:48:13.752064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.695 [2024-10-14 17:48:13.752077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.695 [2024-10-14 17:48:13.752084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.695 [2024-10-14 17:48:13.752090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.695 [2024-10-14 17:48:13.752104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.695 qpair failed and we were unable to recover it. 00:31:14.695 [2024-10-14 17:48:13.762045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.695 [2024-10-14 17:48:13.762103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.695 [2024-10-14 17:48:13.762117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.695 [2024-10-14 17:48:13.762123] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.695 [2024-10-14 17:48:13.762129] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.695 [2024-10-14 17:48:13.762143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.695 qpair failed and we were unable to recover it. 00:31:14.695 [2024-10-14 17:48:13.772065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.695 [2024-10-14 17:48:13.772120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.695 [2024-10-14 17:48:13.772134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.695 [2024-10-14 17:48:13.772140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.695 [2024-10-14 17:48:13.772146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.695 [2024-10-14 17:48:13.772160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.695 qpair failed and we were unable to recover it. 
00:31:14.695 [2024-10-14 17:48:13.782093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.695 [2024-10-14 17:48:13.782141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.695 [2024-10-14 17:48:13.782155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.695 [2024-10-14 17:48:13.782161] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.695 [2024-10-14 17:48:13.782167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.695 [2024-10-14 17:48:13.782180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.695 qpair failed and we were unable to recover it. 00:31:14.695 [2024-10-14 17:48:13.792171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.695 [2024-10-14 17:48:13.792231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.695 [2024-10-14 17:48:13.792245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.695 [2024-10-14 17:48:13.792251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.695 [2024-10-14 17:48:13.792257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.695 [2024-10-14 17:48:13.792270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.695 qpair failed and we were unable to recover it. 00:31:14.695 [2024-10-14 17:48:13.802159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.695 [2024-10-14 17:48:13.802240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.695 [2024-10-14 17:48:13.802254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.695 [2024-10-14 17:48:13.802261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.695 [2024-10-14 17:48:13.802269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.695 [2024-10-14 17:48:13.802283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.695 qpair failed and we were unable to recover it. 
00:31:14.695 [2024-10-14 17:48:13.812136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.695 [2024-10-14 17:48:13.812223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.695 [2024-10-14 17:48:13.812237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.695 [2024-10-14 17:48:13.812243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.695 [2024-10-14 17:48:13.812249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.695 [2024-10-14 17:48:13.812263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.695 qpair failed and we were unable to recover it. 00:31:14.695 [2024-10-14 17:48:13.822203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.695 [2024-10-14 17:48:13.822260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.695 [2024-10-14 17:48:13.822273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.695 [2024-10-14 17:48:13.822280] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.695 [2024-10-14 17:48:13.822286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.695 [2024-10-14 17:48:13.822299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.695 qpair failed and we were unable to recover it. 00:31:14.695 [2024-10-14 17:48:13.832229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.695 [2024-10-14 17:48:13.832300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.695 [2024-10-14 17:48:13.832317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.695 [2024-10-14 17:48:13.832325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.695 [2024-10-14 17:48:13.832331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.696 [2024-10-14 17:48:13.832347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.696 qpair failed and we were unable to recover it. 
00:31:14.955 [2024-10-14 17:48:13.842261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.955 [2024-10-14 17:48:13.842313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.955 [2024-10-14 17:48:13.842329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.955 [2024-10-14 17:48:13.842336] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.955 [2024-10-14 17:48:13.842342] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.955 [2024-10-14 17:48:13.842357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.955 qpair failed and we were unable to recover it. 00:31:14.955 [2024-10-14 17:48:13.852295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.955 [2024-10-14 17:48:13.852353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.955 [2024-10-14 17:48:13.852368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.955 [2024-10-14 17:48:13.852375] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.955 [2024-10-14 17:48:13.852381] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.955 [2024-10-14 17:48:13.852396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.955 qpair failed and we were unable to recover it. 00:31:14.955 [2024-10-14 17:48:13.862336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.955 [2024-10-14 17:48:13.862419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.955 [2024-10-14 17:48:13.862434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.956 [2024-10-14 17:48:13.862440] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.956 [2024-10-14 17:48:13.862446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.956 [2024-10-14 17:48:13.862460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.956 qpair failed and we were unable to recover it. 
00:31:14.956 [2024-10-14 17:48:13.872342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.956 [2024-10-14 17:48:13.872397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.956 [2024-10-14 17:48:13.872412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.956 [2024-10-14 17:48:13.872419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.956 [2024-10-14 17:48:13.872425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.956 [2024-10-14 17:48:13.872439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.956 qpair failed and we were unable to recover it. 00:31:14.956 [2024-10-14 17:48:13.882384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.956 [2024-10-14 17:48:13.882442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.956 [2024-10-14 17:48:13.882457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.956 [2024-10-14 17:48:13.882463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.956 [2024-10-14 17:48:13.882469] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.956 [2024-10-14 17:48:13.882483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.956 qpair failed and we were unable to recover it. 00:31:14.956 [2024-10-14 17:48:13.892338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.956 [2024-10-14 17:48:13.892393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.956 [2024-10-14 17:48:13.892408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.956 [2024-10-14 17:48:13.892414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.956 [2024-10-14 17:48:13.892423] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.956 [2024-10-14 17:48:13.892437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.956 qpair failed and we were unable to recover it. 
00:31:14.956 [2024-10-14 17:48:13.902424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.956 [2024-10-14 17:48:13.902478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.956 [2024-10-14 17:48:13.902495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.956 [2024-10-14 17:48:13.902504] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.956 [2024-10-14 17:48:13.902511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.956 [2024-10-14 17:48:13.902525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.956 qpair failed and we were unable to recover it. 00:31:14.956 [2024-10-14 17:48:13.912459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.956 [2024-10-14 17:48:13.912511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.956 [2024-10-14 17:48:13.912526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.956 [2024-10-14 17:48:13.912532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.956 [2024-10-14 17:48:13.912538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.956 [2024-10-14 17:48:13.912553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.956 qpair failed and we were unable to recover it. 00:31:14.956 [2024-10-14 17:48:13.922494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.956 [2024-10-14 17:48:13.922552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.956 [2024-10-14 17:48:13.922567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.956 [2024-10-14 17:48:13.922573] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.956 [2024-10-14 17:48:13.922579] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:14.956 [2024-10-14 17:48:13.922594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.956 qpair failed and we were unable to recover it. 
00:31:14.956 [2024-10-14 17:48:13.932518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.956 [2024-10-14 17:48:13.932573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.956 [2024-10-14 17:48:13.932587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.956 [2024-10-14 17:48:13.932593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.956 [2024-10-14 17:48:13.932603] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.956 [2024-10-14 17:48:13.932618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.956 qpair failed and we were unable to recover it.
00:31:14.956 [2024-10-14 17:48:13.942466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.956 [2024-10-14 17:48:13.942520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.956 [2024-10-14 17:48:13.942535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.956 [2024-10-14 17:48:13.942543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.956 [2024-10-14 17:48:13.942550] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.956 [2024-10-14 17:48:13.942564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.956 qpair failed and we were unable to recover it.
00:31:14.956 [2024-10-14 17:48:13.952585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.956 [2024-10-14 17:48:13.952643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.956 [2024-10-14 17:48:13.952657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.956 [2024-10-14 17:48:13.952664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.956 [2024-10-14 17:48:13.952670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.956 [2024-10-14 17:48:13.952684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.956 qpair failed and we were unable to recover it.
00:31:14.956 [2024-10-14 17:48:13.962616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.956 [2024-10-14 17:48:13.962716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.956 [2024-10-14 17:48:13.962730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.956 [2024-10-14 17:48:13.962737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.956 [2024-10-14 17:48:13.962742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.956 [2024-10-14 17:48:13.962757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.956 qpair failed and we were unable to recover it.
00:31:14.956 [2024-10-14 17:48:13.972560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.956 [2024-10-14 17:48:13.972617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.956 [2024-10-14 17:48:13.972631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.956 [2024-10-14 17:48:13.972638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.956 [2024-10-14 17:48:13.972644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.956 [2024-10-14 17:48:13.972659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.956 qpair failed and we were unable to recover it.
00:31:14.956 [2024-10-14 17:48:13.982676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.956 [2024-10-14 17:48:13.982741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.956 [2024-10-14 17:48:13.982754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.956 [2024-10-14 17:48:13.982764] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.956 [2024-10-14 17:48:13.982769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.956 [2024-10-14 17:48:13.982784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.956 qpair failed and we were unable to recover it.
00:31:14.956 [2024-10-14 17:48:13.992689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.956 [2024-10-14 17:48:13.992742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.956 [2024-10-14 17:48:13.992756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.956 [2024-10-14 17:48:13.992762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.956 [2024-10-14 17:48:13.992768] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.956 [2024-10-14 17:48:13.992782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.956 qpair failed and we were unable to recover it.
00:31:14.956 [2024-10-14 17:48:14.002714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.956 [2024-10-14 17:48:14.002768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.956 [2024-10-14 17:48:14.002782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.957 [2024-10-14 17:48:14.002788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.957 [2024-10-14 17:48:14.002794] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.957 [2024-10-14 17:48:14.002808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.957 qpair failed and we were unable to recover it.
00:31:14.957 [2024-10-14 17:48:14.012795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.957 [2024-10-14 17:48:14.012847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.957 [2024-10-14 17:48:14.012861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.957 [2024-10-14 17:48:14.012868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.957 [2024-10-14 17:48:14.012874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.957 [2024-10-14 17:48:14.012888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.957 qpair failed and we were unable to recover it.
00:31:14.957 [2024-10-14 17:48:14.022799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.957 [2024-10-14 17:48:14.022866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.957 [2024-10-14 17:48:14.022880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.957 [2024-10-14 17:48:14.022886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.957 [2024-10-14 17:48:14.022892] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.957 [2024-10-14 17:48:14.022906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.957 qpair failed and we were unable to recover it.
00:31:14.957 [2024-10-14 17:48:14.032763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.957 [2024-10-14 17:48:14.032819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.957 [2024-10-14 17:48:14.032833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.957 [2024-10-14 17:48:14.032840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.957 [2024-10-14 17:48:14.032845] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.957 [2024-10-14 17:48:14.032860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.957 qpair failed and we were unable to recover it.
00:31:14.957 [2024-10-14 17:48:14.042773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.957 [2024-10-14 17:48:14.042830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.957 [2024-10-14 17:48:14.042844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.957 [2024-10-14 17:48:14.042851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.957 [2024-10-14 17:48:14.042857] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.957 [2024-10-14 17:48:14.042870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.957 qpair failed and we were unable to recover it.
00:31:14.957 [2024-10-14 17:48:14.052799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.957 [2024-10-14 17:48:14.052855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.957 [2024-10-14 17:48:14.052869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.957 [2024-10-14 17:48:14.052875] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.957 [2024-10-14 17:48:14.052881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.957 [2024-10-14 17:48:14.052895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.957 qpair failed and we were unable to recover it.
00:31:14.957 [2024-10-14 17:48:14.062889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.957 [2024-10-14 17:48:14.062942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.957 [2024-10-14 17:48:14.062956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.957 [2024-10-14 17:48:14.062962] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.957 [2024-10-14 17:48:14.062968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.957 [2024-10-14 17:48:14.062982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.957 qpair failed and we were unable to recover it.
00:31:14.957 [2024-10-14 17:48:14.072907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.957 [2024-10-14 17:48:14.072959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.957 [2024-10-14 17:48:14.072974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.957 [2024-10-14 17:48:14.072984] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.957 [2024-10-14 17:48:14.072989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.957 [2024-10-14 17:48:14.073003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.957 qpair failed and we were unable to recover it.
00:31:14.957 [2024-10-14 17:48:14.083011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.957 [2024-10-14 17:48:14.083113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.957 [2024-10-14 17:48:14.083127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.957 [2024-10-14 17:48:14.083134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.957 [2024-10-14 17:48:14.083140] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.957 [2024-10-14 17:48:14.083154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.957 qpair failed and we were unable to recover it.
00:31:14.957 [2024-10-14 17:48:14.092928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.957 [2024-10-14 17:48:14.092982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.957 [2024-10-14 17:48:14.092998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.957 [2024-10-14 17:48:14.093006] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.957 [2024-10-14 17:48:14.093012] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:14.957 [2024-10-14 17:48:14.093026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.957 qpair failed and we were unable to recover it.
00:31:15.217 [2024-10-14 17:48:14.102998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.217 [2024-10-14 17:48:14.103051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.217 [2024-10-14 17:48:14.103068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.217 [2024-10-14 17:48:14.103075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.217 [2024-10-14 17:48:14.103080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.217 [2024-10-14 17:48:14.103096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.217 qpair failed and we were unable to recover it.
00:31:15.217 [2024-10-14 17:48:14.113039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.217 [2024-10-14 17:48:14.113091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.217 [2024-10-14 17:48:14.113106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.217 [2024-10-14 17:48:14.113112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.217 [2024-10-14 17:48:14.113118] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.217 [2024-10-14 17:48:14.113132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.217 qpair failed and we were unable to recover it.
00:31:15.217 [2024-10-14 17:48:14.123093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.217 [2024-10-14 17:48:14.123149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.217 [2024-10-14 17:48:14.123164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.217 [2024-10-14 17:48:14.123171] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.217 [2024-10-14 17:48:14.123177] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.217 [2024-10-14 17:48:14.123191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.217 qpair failed and we were unable to recover it.
00:31:15.217 [2024-10-14 17:48:14.133030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.217 [2024-10-14 17:48:14.133085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.217 [2024-10-14 17:48:14.133099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.217 [2024-10-14 17:48:14.133105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.217 [2024-10-14 17:48:14.133111] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.217 [2024-10-14 17:48:14.133125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.217 qpair failed and we were unable to recover it.
00:31:15.217 [2024-10-14 17:48:14.143162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.218 [2024-10-14 17:48:14.143219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.218 [2024-10-14 17:48:14.143233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.218 [2024-10-14 17:48:14.143239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.218 [2024-10-14 17:48:14.143245] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.218 [2024-10-14 17:48:14.143259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.218 qpair failed and we were unable to recover it.
00:31:15.218 [2024-10-14 17:48:14.153081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.218 [2024-10-14 17:48:14.153134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.218 [2024-10-14 17:48:14.153148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.218 [2024-10-14 17:48:14.153154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.218 [2024-10-14 17:48:14.153160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.218 [2024-10-14 17:48:14.153174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.218 qpair failed and we were unable to recover it.
00:31:15.218 [2024-10-14 17:48:14.163115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.218 [2024-10-14 17:48:14.163175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.218 [2024-10-14 17:48:14.163188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.218 [2024-10-14 17:48:14.163200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.218 [2024-10-14 17:48:14.163206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.218 [2024-10-14 17:48:14.163220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.218 qpair failed and we were unable to recover it.
00:31:15.218 [2024-10-14 17:48:14.173231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.218 [2024-10-14 17:48:14.173295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.218 [2024-10-14 17:48:14.173309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.218 [2024-10-14 17:48:14.173315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.218 [2024-10-14 17:48:14.173321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.218 [2024-10-14 17:48:14.173336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.218 qpair failed and we were unable to recover it.
00:31:15.218 [2024-10-14 17:48:14.183164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.218 [2024-10-14 17:48:14.183218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.218 [2024-10-14 17:48:14.183233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.218 [2024-10-14 17:48:14.183240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.218 [2024-10-14 17:48:14.183245] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.218 [2024-10-14 17:48:14.183259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.218 qpair failed and we were unable to recover it.
00:31:15.218 [2024-10-14 17:48:14.193236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.218 [2024-10-14 17:48:14.193288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.218 [2024-10-14 17:48:14.193302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.218 [2024-10-14 17:48:14.193310] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.218 [2024-10-14 17:48:14.193316] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.218 [2024-10-14 17:48:14.193330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.218 qpair failed and we were unable to recover it.
00:31:15.218 [2024-10-14 17:48:14.203344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.218 [2024-10-14 17:48:14.203401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.218 [2024-10-14 17:48:14.203415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.218 [2024-10-14 17:48:14.203421] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.218 [2024-10-14 17:48:14.203427] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.218 [2024-10-14 17:48:14.203441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.218 qpair failed and we were unable to recover it.
00:31:15.218 [2024-10-14 17:48:14.213327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.218 [2024-10-14 17:48:14.213396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.218 [2024-10-14 17:48:14.213410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.218 [2024-10-14 17:48:14.213416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.218 [2024-10-14 17:48:14.213422] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.218 [2024-10-14 17:48:14.213436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.218 qpair failed and we were unable to recover it.
00:31:15.218 [2024-10-14 17:48:14.223357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.218 [2024-10-14 17:48:14.223411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.218 [2024-10-14 17:48:14.223425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.218 [2024-10-14 17:48:14.223432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.218 [2024-10-14 17:48:14.223437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.218 [2024-10-14 17:48:14.223451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.218 qpair failed and we were unable to recover it.
00:31:15.218 [2024-10-14 17:48:14.233386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.218 [2024-10-14 17:48:14.233438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.218 [2024-10-14 17:48:14.233451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.218 [2024-10-14 17:48:14.233458] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.218 [2024-10-14 17:48:14.233463] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.218 [2024-10-14 17:48:14.233478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.218 qpair failed and we were unable to recover it.
00:31:15.218 [2024-10-14 17:48:14.243328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.218 [2024-10-14 17:48:14.243384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.218 [2024-10-14 17:48:14.243397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.218 [2024-10-14 17:48:14.243404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.218 [2024-10-14 17:48:14.243410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.218 [2024-10-14 17:48:14.243424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.218 qpair failed and we were unable to recover it.
00:31:15.218 [2024-10-14 17:48:14.253442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.218 [2024-10-14 17:48:14.253495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.218 [2024-10-14 17:48:14.253510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.218 [2024-10-14 17:48:14.253520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.218 [2024-10-14 17:48:14.253525] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.218 [2024-10-14 17:48:14.253539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.218 qpair failed and we were unable to recover it.
00:31:15.218 [2024-10-14 17:48:14.263450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.218 [2024-10-14 17:48:14.263500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.218 [2024-10-14 17:48:14.263514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.218 [2024-10-14 17:48:14.263521] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.218 [2024-10-14 17:48:14.263527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.218 [2024-10-14 17:48:14.263540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.218 qpair failed and we were unable to recover it.
00:31:15.218 [2024-10-14 17:48:14.273489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.218 [2024-10-14 17:48:14.273538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.218 [2024-10-14 17:48:14.273551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.218 [2024-10-14 17:48:14.273558] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.218 [2024-10-14 17:48:14.273563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.218 [2024-10-14 17:48:14.273577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.218 qpair failed and we were unable to recover it.
00:31:15.218 [2024-10-14 17:48:14.283522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.218 [2024-10-14 17:48:14.283576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.218 [2024-10-14 17:48:14.283590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.219 [2024-10-14 17:48:14.283597] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.219 [2024-10-14 17:48:14.283608] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.219 [2024-10-14 17:48:14.283622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.219 qpair failed and we were unable to recover it.
00:31:15.219 [2024-10-14 17:48:14.293512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.219 [2024-10-14 17:48:14.293586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.219 [2024-10-14 17:48:14.293605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.219 [2024-10-14 17:48:14.293613] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.219 [2024-10-14 17:48:14.293619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.219 [2024-10-14 17:48:14.293633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.219 qpair failed and we were unable to recover it.
00:31:15.219 [2024-10-14 17:48:14.303577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.219 [2024-10-14 17:48:14.303648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.219 [2024-10-14 17:48:14.303662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.219 [2024-10-14 17:48:14.303669] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.219 [2024-10-14 17:48:14.303674] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.219 [2024-10-14 17:48:14.303689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.219 qpair failed and we were unable to recover it.
00:31:15.219 [2024-10-14 17:48:14.313631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.219 [2024-10-14 17:48:14.313691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.219 [2024-10-14 17:48:14.313706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.219 [2024-10-14 17:48:14.313712] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.219 [2024-10-14 17:48:14.313718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.219 [2024-10-14 17:48:14.313732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.219 qpair failed and we were unable to recover it.
00:31:15.219 [2024-10-14 17:48:14.323638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.219 [2024-10-14 17:48:14.323692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.219 [2024-10-14 17:48:14.323706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.219 [2024-10-14 17:48:14.323713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.219 [2024-10-14 17:48:14.323718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.219 [2024-10-14 17:48:14.323732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.219 qpair failed and we were unable to recover it.
00:31:15.219 [2024-10-14 17:48:14.333658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.219 [2024-10-14 17:48:14.333712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.219 [2024-10-14 17:48:14.333726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.219 [2024-10-14 17:48:14.333733] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.219 [2024-10-14 17:48:14.333738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.219 [2024-10-14 17:48:14.333752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.219 qpair failed and we were unable to recover it.
00:31:15.219 [2024-10-14 17:48:14.343691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.219 [2024-10-14 17:48:14.343740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.219 [2024-10-14 17:48:14.343757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.219 [2024-10-14 17:48:14.343763] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.219 [2024-10-14 17:48:14.343769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.219 [2024-10-14 17:48:14.343783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.219 qpair failed and we were unable to recover it.
00:31:15.219 [2024-10-14 17:48:14.353688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.219 [2024-10-14 17:48:14.353742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.219 [2024-10-14 17:48:14.353757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.219 [2024-10-14 17:48:14.353764] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.219 [2024-10-14 17:48:14.353770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.219 [2024-10-14 17:48:14.353785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.219 qpair failed and we were unable to recover it.
00:31:15.479 [2024-10-14 17:48:14.363774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.479 [2024-10-14 17:48:14.363850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.479 [2024-10-14 17:48:14.363867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.479 [2024-10-14 17:48:14.363874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.479 [2024-10-14 17:48:14.363880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.479 [2024-10-14 17:48:14.363896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.479 qpair failed and we were unable to recover it.
00:31:15.479 [2024-10-14 17:48:14.373776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.479 [2024-10-14 17:48:14.373828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.479 [2024-10-14 17:48:14.373843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.479 [2024-10-14 17:48:14.373849] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.479 [2024-10-14 17:48:14.373855] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.479 [2024-10-14 17:48:14.373870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.479 qpair failed and we were unable to recover it.
00:31:15.479 [2024-10-14 17:48:14.383800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.479 [2024-10-14 17:48:14.383853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.479 [2024-10-14 17:48:14.383867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.479 [2024-10-14 17:48:14.383874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.479 [2024-10-14 17:48:14.383880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.479 [2024-10-14 17:48:14.383894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.479 qpair failed and we were unable to recover it.
00:31:15.479 [2024-10-14 17:48:14.393827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.479 [2024-10-14 17:48:14.393886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.479 [2024-10-14 17:48:14.393900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.479 [2024-10-14 17:48:14.393907] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.479 [2024-10-14 17:48:14.393912] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.479 [2024-10-14 17:48:14.393927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.479 qpair failed and we were unable to recover it.
00:31:15.479 [2024-10-14 17:48:14.403869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.479 [2024-10-14 17:48:14.403927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.479 [2024-10-14 17:48:14.403941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.479 [2024-10-14 17:48:14.403948] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.479 [2024-10-14 17:48:14.403954] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.479 [2024-10-14 17:48:14.403968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.479 qpair failed and we were unable to recover it.
00:31:15.479 [2024-10-14 17:48:14.413829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.479 [2024-10-14 17:48:14.413884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.479 [2024-10-14 17:48:14.413898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.479 [2024-10-14 17:48:14.413904] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.479 [2024-10-14 17:48:14.413911] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.479 [2024-10-14 17:48:14.413925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.479 qpair failed and we were unable to recover it.
00:31:15.479 [2024-10-14 17:48:14.423907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.479 [2024-10-14 17:48:14.424003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.480 [2024-10-14 17:48:14.424017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.480 [2024-10-14 17:48:14.424024] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.480 [2024-10-14 17:48:14.424030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.480 [2024-10-14 17:48:14.424043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.480 qpair failed and we were unable to recover it.
00:31:15.480 [2024-10-14 17:48:14.433938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.480 [2024-10-14 17:48:14.434004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.480 [2024-10-14 17:48:14.434021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.480 [2024-10-14 17:48:14.434027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.480 [2024-10-14 17:48:14.434033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.480 [2024-10-14 17:48:14.434047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.480 qpair failed and we were unable to recover it.
00:31:15.480 [2024-10-14 17:48:14.443954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.480 [2024-10-14 17:48:14.444041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.480 [2024-10-14 17:48:14.444055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.480 [2024-10-14 17:48:14.444062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.480 [2024-10-14 17:48:14.444067] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.480 [2024-10-14 17:48:14.444081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.480 qpair failed and we were unable to recover it.
00:31:15.480 [2024-10-14 17:48:14.453966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.480 [2024-10-14 17:48:14.454023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.480 [2024-10-14 17:48:14.454036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.480 [2024-10-14 17:48:14.454043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.480 [2024-10-14 17:48:14.454049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.480 [2024-10-14 17:48:14.454063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.480 qpair failed and we were unable to recover it.
00:31:15.480 [2024-10-14 17:48:14.464016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.480 [2024-10-14 17:48:14.464073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.480 [2024-10-14 17:48:14.464086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.480 [2024-10-14 17:48:14.464093] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.480 [2024-10-14 17:48:14.464098] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.480 [2024-10-14 17:48:14.464112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.480 qpair failed and we were unable to recover it.
00:31:15.480 [2024-10-14 17:48:14.474045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.480 [2024-10-14 17:48:14.474098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.480 [2024-10-14 17:48:14.474112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.480 [2024-10-14 17:48:14.474118] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.480 [2024-10-14 17:48:14.474124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.480 [2024-10-14 17:48:14.474138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.480 qpair failed and we were unable to recover it.
00:31:15.480 [2024-10-14 17:48:14.484081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.480 [2024-10-14 17:48:14.484132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.480 [2024-10-14 17:48:14.484146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.480 [2024-10-14 17:48:14.484152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.480 [2024-10-14 17:48:14.484158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.480 [2024-10-14 17:48:14.484172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.480 qpair failed and we were unable to recover it.
00:31:15.480 [2024-10-14 17:48:14.494107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.480 [2024-10-14 17:48:14.494156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.480 [2024-10-14 17:48:14.494170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.480 [2024-10-14 17:48:14.494176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.480 [2024-10-14 17:48:14.494182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:15.480 [2024-10-14 17:48:14.494196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.480 qpair failed and we were unable to recover it.
00:31:15.480 [2024-10-14 17:48:14.504057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.480 [2024-10-14 17:48:14.504110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.480 [2024-10-14 17:48:14.504123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.480 [2024-10-14 17:48:14.504130] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.480 [2024-10-14 17:48:14.504136] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.480 [2024-10-14 17:48:14.504149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.480 qpair failed and we were unable to recover it. 00:31:15.480 [2024-10-14 17:48:14.514164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.480 [2024-10-14 17:48:14.514216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.480 [2024-10-14 17:48:14.514230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.480 [2024-10-14 17:48:14.514237] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.480 [2024-10-14 17:48:14.514243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.480 [2024-10-14 17:48:14.514257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.480 qpair failed and we were unable to recover it. 00:31:15.480 [2024-10-14 17:48:14.524200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.480 [2024-10-14 17:48:14.524258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.480 [2024-10-14 17:48:14.524275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.480 [2024-10-14 17:48:14.524282] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.480 [2024-10-14 17:48:14.524288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.480 [2024-10-14 17:48:14.524302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.480 qpair failed and we were unable to recover it. 
00:31:15.480 [2024-10-14 17:48:14.534216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.480 [2024-10-14 17:48:14.534321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.480 [2024-10-14 17:48:14.534335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.480 [2024-10-14 17:48:14.534341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.480 [2024-10-14 17:48:14.534347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.480 [2024-10-14 17:48:14.534360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.480 qpair failed and we were unable to recover it. 00:31:15.480 [2024-10-14 17:48:14.544243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.480 [2024-10-14 17:48:14.544297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.480 [2024-10-14 17:48:14.544310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.480 [2024-10-14 17:48:14.544317] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.480 [2024-10-14 17:48:14.544323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.480 [2024-10-14 17:48:14.544336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.480 qpair failed and we were unable to recover it. 00:31:15.480 [2024-10-14 17:48:14.554329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.480 [2024-10-14 17:48:14.554412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.480 [2024-10-14 17:48:14.554425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.480 [2024-10-14 17:48:14.554432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.480 [2024-10-14 17:48:14.554437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.481 [2024-10-14 17:48:14.554451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.481 qpair failed and we were unable to recover it. 
00:31:15.481 [2024-10-14 17:48:14.564302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.481 [2024-10-14 17:48:14.564370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.481 [2024-10-14 17:48:14.564384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.481 [2024-10-14 17:48:14.564391] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.481 [2024-10-14 17:48:14.564396] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.481 [2024-10-14 17:48:14.564414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.481 qpair failed and we were unable to recover it. 00:31:15.481 [2024-10-14 17:48:14.574365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.481 [2024-10-14 17:48:14.574423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.481 [2024-10-14 17:48:14.574437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.481 [2024-10-14 17:48:14.574443] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.481 [2024-10-14 17:48:14.574449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.481 [2024-10-14 17:48:14.574463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.481 qpair failed and we were unable to recover it. 00:31:15.481 [2024-10-14 17:48:14.584344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.481 [2024-10-14 17:48:14.584402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.481 [2024-10-14 17:48:14.584417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.481 [2024-10-14 17:48:14.584424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.481 [2024-10-14 17:48:14.584431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.481 [2024-10-14 17:48:14.584445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.481 qpair failed and we were unable to recover it. 
00:31:15.481 [2024-10-14 17:48:14.594299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.481 [2024-10-14 17:48:14.594370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.481 [2024-10-14 17:48:14.594384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.481 [2024-10-14 17:48:14.594391] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.481 [2024-10-14 17:48:14.594397] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.481 [2024-10-14 17:48:14.594410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.481 qpair failed and we were unable to recover it. 00:31:15.481 [2024-10-14 17:48:14.604411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.481 [2024-10-14 17:48:14.604464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.481 [2024-10-14 17:48:14.604477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.481 [2024-10-14 17:48:14.604483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.481 [2024-10-14 17:48:14.604489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.481 [2024-10-14 17:48:14.604502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.481 qpair failed and we were unable to recover it. 00:31:15.481 [2024-10-14 17:48:14.614432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.481 [2024-10-14 17:48:14.614487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.481 [2024-10-14 17:48:14.614506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.481 [2024-10-14 17:48:14.614513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.481 [2024-10-14 17:48:14.614519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.481 [2024-10-14 17:48:14.614533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.481 qpair failed and we were unable to recover it. 
00:31:15.741 [2024-10-14 17:48:14.624470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.741 [2024-10-14 17:48:14.624525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.741 [2024-10-14 17:48:14.624542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.741 [2024-10-14 17:48:14.624549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.741 [2024-10-14 17:48:14.624555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.741 [2024-10-14 17:48:14.624570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.741 qpair failed and we were unable to recover it. 00:31:15.741 [2024-10-14 17:48:14.634502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.741 [2024-10-14 17:48:14.634583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.741 [2024-10-14 17:48:14.634598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.741 [2024-10-14 17:48:14.634608] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.741 [2024-10-14 17:48:14.634613] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.741 [2024-10-14 17:48:14.634627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.741 qpair failed and we were unable to recover it. 00:31:15.741 [2024-10-14 17:48:14.644503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.741 [2024-10-14 17:48:14.644557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.741 [2024-10-14 17:48:14.644570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.741 [2024-10-14 17:48:14.644577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.741 [2024-10-14 17:48:14.644583] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.741 [2024-10-14 17:48:14.644597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.741 qpair failed and we were unable to recover it. 
00:31:15.741 [2024-10-14 17:48:14.654551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.741 [2024-10-14 17:48:14.654607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.741 [2024-10-14 17:48:14.654621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.741 [2024-10-14 17:48:14.654627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.741 [2024-10-14 17:48:14.654633] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.741 [2024-10-14 17:48:14.654651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.741 qpair failed and we were unable to recover it. 00:31:15.741 [2024-10-14 17:48:14.664573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.741 [2024-10-14 17:48:14.664631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.741 [2024-10-14 17:48:14.664645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.741 [2024-10-14 17:48:14.664652] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.741 [2024-10-14 17:48:14.664658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.741 [2024-10-14 17:48:14.664672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.741 qpair failed and we were unable to recover it. 00:31:15.741 [2024-10-14 17:48:14.674603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.741 [2024-10-14 17:48:14.674659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.741 [2024-10-14 17:48:14.674673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.741 [2024-10-14 17:48:14.674680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.741 [2024-10-14 17:48:14.674685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.741 [2024-10-14 17:48:14.674699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.741 qpair failed and we were unable to recover it. 
00:31:15.741 [2024-10-14 17:48:14.684573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.741 [2024-10-14 17:48:14.684635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.741 [2024-10-14 17:48:14.684649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.741 [2024-10-14 17:48:14.684656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.741 [2024-10-14 17:48:14.684661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.741 [2024-10-14 17:48:14.684675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.741 qpair failed and we were unable to recover it. 00:31:15.741 [2024-10-14 17:48:14.694667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.741 [2024-10-14 17:48:14.694724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.741 [2024-10-14 17:48:14.694737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.741 [2024-10-14 17:48:14.694743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.741 [2024-10-14 17:48:14.694749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.741 [2024-10-14 17:48:14.694763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.741 qpair failed and we were unable to recover it. 00:31:15.741 [2024-10-14 17:48:14.704696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.741 [2024-10-14 17:48:14.704749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.741 [2024-10-14 17:48:14.704766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.741 [2024-10-14 17:48:14.704772] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.741 [2024-10-14 17:48:14.704778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.741 [2024-10-14 17:48:14.704792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.741 qpair failed and we were unable to recover it. 
00:31:15.741 [2024-10-14 17:48:14.714699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.741 [2024-10-14 17:48:14.714749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.741 [2024-10-14 17:48:14.714763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.741 [2024-10-14 17:48:14.714769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.741 [2024-10-14 17:48:14.714775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.742 [2024-10-14 17:48:14.714789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.742 qpair failed and we were unable to recover it. 00:31:15.742 [2024-10-14 17:48:14.724773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.742 [2024-10-14 17:48:14.724825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.742 [2024-10-14 17:48:14.724839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.742 [2024-10-14 17:48:14.724846] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.742 [2024-10-14 17:48:14.724851] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.742 [2024-10-14 17:48:14.724866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.742 qpair failed and we were unable to recover it. 00:31:15.742 [2024-10-14 17:48:14.734823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.742 [2024-10-14 17:48:14.734893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.742 [2024-10-14 17:48:14.734906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.742 [2024-10-14 17:48:14.734912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.742 [2024-10-14 17:48:14.734918] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.742 [2024-10-14 17:48:14.734931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.742 qpair failed and we were unable to recover it. 
00:31:15.742 [2024-10-14 17:48:14.744821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.742 [2024-10-14 17:48:14.744871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.742 [2024-10-14 17:48:14.744885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.742 [2024-10-14 17:48:14.744891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.742 [2024-10-14 17:48:14.744898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.742 [2024-10-14 17:48:14.744914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.742 qpair failed and we were unable to recover it. 00:31:15.742 [2024-10-14 17:48:14.754901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.742 [2024-10-14 17:48:14.754957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.742 [2024-10-14 17:48:14.754971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.742 [2024-10-14 17:48:14.754978] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.742 [2024-10-14 17:48:14.754984] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.742 [2024-10-14 17:48:14.754998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.742 qpair failed and we were unable to recover it. 00:31:15.742 [2024-10-14 17:48:14.764884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.742 [2024-10-14 17:48:14.764950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.742 [2024-10-14 17:48:14.764965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.742 [2024-10-14 17:48:14.764971] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.742 [2024-10-14 17:48:14.764977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.742 [2024-10-14 17:48:14.764990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.742 qpair failed and we were unable to recover it. 
00:31:15.742 [2024-10-14 17:48:14.774907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.742 [2024-10-14 17:48:14.774961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.742 [2024-10-14 17:48:14.774975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.742 [2024-10-14 17:48:14.774982] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.742 [2024-10-14 17:48:14.774987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.742 [2024-10-14 17:48:14.775001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.742 qpair failed and we were unable to recover it. 00:31:15.742 [2024-10-14 17:48:14.784931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.742 [2024-10-14 17:48:14.784986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.742 [2024-10-14 17:48:14.785000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.742 [2024-10-14 17:48:14.785006] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.742 [2024-10-14 17:48:14.785012] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.742 [2024-10-14 17:48:14.785026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.742 qpair failed and we were unable to recover it. 00:31:15.742 [2024-10-14 17:48:14.794956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.742 [2024-10-14 17:48:14.795008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.742 [2024-10-14 17:48:14.795024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.742 [2024-10-14 17:48:14.795031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.742 [2024-10-14 17:48:14.795037] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.742 [2024-10-14 17:48:14.795051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.742 qpair failed and we were unable to recover it. 
00:31:15.742 [2024-10-14 17:48:14.804986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.742 [2024-10-14 17:48:14.805041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.742 [2024-10-14 17:48:14.805054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.742 [2024-10-14 17:48:14.805061] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.742 [2024-10-14 17:48:14.805067] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.742 [2024-10-14 17:48:14.805081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.742 qpair failed and we were unable to recover it. 00:31:15.742 [2024-10-14 17:48:14.815020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.742 [2024-10-14 17:48:14.815078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.742 [2024-10-14 17:48:14.815092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.742 [2024-10-14 17:48:14.815099] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.742 [2024-10-14 17:48:14.815105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.742 [2024-10-14 17:48:14.815118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.742 qpair failed and we were unable to recover it. 00:31:15.742 [2024-10-14 17:48:14.825047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.742 [2024-10-14 17:48:14.825115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.742 [2024-10-14 17:48:14.825128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.742 [2024-10-14 17:48:14.825135] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.742 [2024-10-14 17:48:14.825141] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.742 [2024-10-14 17:48:14.825155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.742 qpair failed and we were unable to recover it. 
00:31:15.742 [2024-10-14 17:48:14.835066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.742 [2024-10-14 17:48:14.835119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.742 [2024-10-14 17:48:14.835132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.742 [2024-10-14 17:48:14.835139] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.742 [2024-10-14 17:48:14.835150] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.742 [2024-10-14 17:48:14.835165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.742 qpair failed and we were unable to recover it. 00:31:15.742 [2024-10-14 17:48:14.845160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.742 [2024-10-14 17:48:14.845216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.742 [2024-10-14 17:48:14.845233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.742 [2024-10-14 17:48:14.845240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.742 [2024-10-14 17:48:14.845246] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.742 [2024-10-14 17:48:14.845261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.742 qpair failed and we were unable to recover it. 00:31:15.742 [2024-10-14 17:48:14.855126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.742 [2024-10-14 17:48:14.855182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.742 [2024-10-14 17:48:14.855196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.742 [2024-10-14 17:48:14.855202] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.742 [2024-10-14 17:48:14.855209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.742 [2024-10-14 17:48:14.855223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.742 qpair failed and we were unable to recover it. 
00:31:15.743 [2024-10-14 17:48:14.865086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.743 [2024-10-14 17:48:14.865140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.743 [2024-10-14 17:48:14.865153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.743 [2024-10-14 17:48:14.865160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.743 [2024-10-14 17:48:14.865166] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.743 [2024-10-14 17:48:14.865179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.743 qpair failed and we were unable to recover it. 00:31:15.743 [2024-10-14 17:48:14.875188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.743 [2024-10-14 17:48:14.875257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.743 [2024-10-14 17:48:14.875271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.743 [2024-10-14 17:48:14.875277] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.743 [2024-10-14 17:48:14.875283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:15.743 [2024-10-14 17:48:14.875298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:15.743 qpair failed and we were unable to recover it. 00:31:16.003 [2024-10-14 17:48:14.885217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.003 [2024-10-14 17:48:14.885279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.003 [2024-10-14 17:48:14.885303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.003 [2024-10-14 17:48:14.885314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.003 [2024-10-14 17:48:14.885320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.003 [2024-10-14 17:48:14.885336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.003 qpair failed and we were unable to recover it. 
00:31:16.003 [2024-10-14 17:48:14.895268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.003 [2024-10-14 17:48:14.895332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.003 [2024-10-14 17:48:14.895347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.003 [2024-10-14 17:48:14.895354] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.003 [2024-10-14 17:48:14.895360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.003 [2024-10-14 17:48:14.895374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.003 qpair failed and we were unable to recover it. 00:31:16.003 [2024-10-14 17:48:14.905321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.003 [2024-10-14 17:48:14.905375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.003 [2024-10-14 17:48:14.905389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.003 [2024-10-14 17:48:14.905396] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.003 [2024-10-14 17:48:14.905402] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.003 [2024-10-14 17:48:14.905416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.003 qpair failed and we were unable to recover it. 00:31:16.003 [2024-10-14 17:48:14.915345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.003 [2024-10-14 17:48:14.915396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.003 [2024-10-14 17:48:14.915410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.003 [2024-10-14 17:48:14.915417] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.003 [2024-10-14 17:48:14.915423] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.003 [2024-10-14 17:48:14.915437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.003 qpair failed and we were unable to recover it. 
00:31:16.003 [2024-10-14 17:48:14.925363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.003 [2024-10-14 17:48:14.925465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.003 [2024-10-14 17:48:14.925479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.003 [2024-10-14 17:48:14.925485] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.003 [2024-10-14 17:48:14.925494] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.003 [2024-10-14 17:48:14.925508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.003 qpair failed and we were unable to recover it. 00:31:16.003 [2024-10-14 17:48:14.935356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.003 [2024-10-14 17:48:14.935411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.003 [2024-10-14 17:48:14.935425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.003 [2024-10-14 17:48:14.935432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.003 [2024-10-14 17:48:14.935438] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.003 [2024-10-14 17:48:14.935452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.003 qpair failed and we were unable to recover it. 00:31:16.003 [2024-10-14 17:48:14.945389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.003 [2024-10-14 17:48:14.945436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.003 [2024-10-14 17:48:14.945450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.003 [2024-10-14 17:48:14.945457] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.003 [2024-10-14 17:48:14.945463] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.003 [2024-10-14 17:48:14.945478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.003 qpair failed and we were unable to recover it. 
00:31:16.003 [2024-10-14 17:48:14.955343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.003 [2024-10-14 17:48:14.955395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.003 [2024-10-14 17:48:14.955409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.003 [2024-10-14 17:48:14.955415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.003 [2024-10-14 17:48:14.955421] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.003 [2024-10-14 17:48:14.955435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.003 qpair failed and we were unable to recover it. 00:31:16.003 [2024-10-14 17:48:14.965499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.003 [2024-10-14 17:48:14.965604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.003 [2024-10-14 17:48:14.965618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.003 [2024-10-14 17:48:14.965625] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.003 [2024-10-14 17:48:14.965631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.003 [2024-10-14 17:48:14.965645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.003 qpair failed and we were unable to recover it. 00:31:16.003 [2024-10-14 17:48:14.975478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.003 [2024-10-14 17:48:14.975539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.003 [2024-10-14 17:48:14.975553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.003 [2024-10-14 17:48:14.975559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.003 [2024-10-14 17:48:14.975565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.003 [2024-10-14 17:48:14.975579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.003 qpair failed and we were unable to recover it. 
00:31:16.003 [2024-10-14 17:48:14.985443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.003 [2024-10-14 17:48:14.985536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.003 [2024-10-14 17:48:14.985550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.003 [2024-10-14 17:48:14.985556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.003 [2024-10-14 17:48:14.985562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.003 [2024-10-14 17:48:14.985575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.003 qpair failed and we were unable to recover it. 00:31:16.003 [2024-10-14 17:48:14.995537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.003 [2024-10-14 17:48:14.995611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.003 [2024-10-14 17:48:14.995641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.003 [2024-10-14 17:48:14.995648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.003 [2024-10-14 17:48:14.995654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.003 [2024-10-14 17:48:14.995670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.003 qpair failed and we were unable to recover it. 00:31:16.003 [2024-10-14 17:48:15.005569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.003 [2024-10-14 17:48:15.005625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.003 [2024-10-14 17:48:15.005639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.003 [2024-10-14 17:48:15.005646] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.003 [2024-10-14 17:48:15.005652] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.003 [2024-10-14 17:48:15.005665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.003 qpair failed and we were unable to recover it. 
00:31:16.003 [2024-10-14 17:48:15.015590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.003 [2024-10-14 17:48:15.015686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.004 [2024-10-14 17:48:15.015701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.004 [2024-10-14 17:48:15.015707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.004 [2024-10-14 17:48:15.015716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.004 [2024-10-14 17:48:15.015730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.004 qpair failed and we were unable to recover it. 00:31:16.004 [2024-10-14 17:48:15.025656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.004 [2024-10-14 17:48:15.025716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.004 [2024-10-14 17:48:15.025729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.004 [2024-10-14 17:48:15.025736] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.004 [2024-10-14 17:48:15.025741] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.004 [2024-10-14 17:48:15.025755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.004 qpair failed and we were unable to recover it. 00:31:16.004 [2024-10-14 17:48:15.035718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.004 [2024-10-14 17:48:15.035770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.004 [2024-10-14 17:48:15.035785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.004 [2024-10-14 17:48:15.035791] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.004 [2024-10-14 17:48:15.035797] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.004 [2024-10-14 17:48:15.035812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.004 qpair failed and we were unable to recover it. 
00:31:16.004 [2024-10-14 17:48:15.045624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.004 [2024-10-14 17:48:15.045681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.004 [2024-10-14 17:48:15.045695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.004 [2024-10-14 17:48:15.045702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.004 [2024-10-14 17:48:15.045708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.004 [2024-10-14 17:48:15.045722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.004 qpair failed and we were unable to recover it. 00:31:16.004 [2024-10-14 17:48:15.055631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.004 [2024-10-14 17:48:15.055686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.004 [2024-10-14 17:48:15.055699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.004 [2024-10-14 17:48:15.055706] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.004 [2024-10-14 17:48:15.055712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.004 [2024-10-14 17:48:15.055726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.004 qpair failed and we were unable to recover it. 00:31:16.004 [2024-10-14 17:48:15.065729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.004 [2024-10-14 17:48:15.065792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.004 [2024-10-14 17:48:15.065806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.004 [2024-10-14 17:48:15.065813] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.004 [2024-10-14 17:48:15.065819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.004 [2024-10-14 17:48:15.065832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.004 qpair failed and we were unable to recover it. 
00:31:16.004 [2024-10-14 17:48:15.075753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.004 [2024-10-14 17:48:15.075803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.004 [2024-10-14 17:48:15.075817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.004 [2024-10-14 17:48:15.075824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.004 [2024-10-14 17:48:15.075829] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.004 [2024-10-14 17:48:15.075843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.004 qpair failed and we were unable to recover it. 00:31:16.004 [2024-10-14 17:48:15.085792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.004 [2024-10-14 17:48:15.085846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.004 [2024-10-14 17:48:15.085860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.004 [2024-10-14 17:48:15.085866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.004 [2024-10-14 17:48:15.085872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.004 [2024-10-14 17:48:15.085886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.004 qpair failed and we were unable to recover it. 00:31:16.004 [2024-10-14 17:48:15.095817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.004 [2024-10-14 17:48:15.095900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.004 [2024-10-14 17:48:15.095914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.004 [2024-10-14 17:48:15.095920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.004 [2024-10-14 17:48:15.095926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.004 [2024-10-14 17:48:15.095939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.004 qpair failed and we were unable to recover it. 
00:31:16.004 [2024-10-14 17:48:15.105854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.004 [2024-10-14 17:48:15.105904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.004 [2024-10-14 17:48:15.105918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.004 [2024-10-14 17:48:15.105924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.004 [2024-10-14 17:48:15.105933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.004 [2024-10-14 17:48:15.105947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.004 qpair failed and we were unable to recover it. 00:31:16.004 [2024-10-14 17:48:15.115887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.004 [2024-10-14 17:48:15.115938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.004 [2024-10-14 17:48:15.115951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.004 [2024-10-14 17:48:15.115958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.004 [2024-10-14 17:48:15.115964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.004 [2024-10-14 17:48:15.115978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.004 qpair failed and we were unable to recover it. 00:31:16.004 [2024-10-14 17:48:15.125903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.004 [2024-10-14 17:48:15.125961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.004 [2024-10-14 17:48:15.125974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.004 [2024-10-14 17:48:15.125981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.004 [2024-10-14 17:48:15.125987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.004 [2024-10-14 17:48:15.126000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.004 qpair failed and we were unable to recover it. 
00:31:16.004 [2024-10-14 17:48:15.135945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.004 [2024-10-14 17:48:15.136050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.004 [2024-10-14 17:48:15.136064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.004 [2024-10-14 17:48:15.136070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.004 [2024-10-14 17:48:15.136076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.004 [2024-10-14 17:48:15.136090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.004 qpair failed and we were unable to recover it. 00:31:16.265 [2024-10-14 17:48:15.145892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.265 [2024-10-14 17:48:15.145943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.265 [2024-10-14 17:48:15.145960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.265 [2024-10-14 17:48:15.145966] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.265 [2024-10-14 17:48:15.145972] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.265 [2024-10-14 17:48:15.145987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.265 qpair failed and we were unable to recover it. 00:31:16.265 [2024-10-14 17:48:15.156035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.265 [2024-10-14 17:48:15.156094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.265 [2024-10-14 17:48:15.156109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.265 [2024-10-14 17:48:15.156116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.265 [2024-10-14 17:48:15.156122] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.265 [2024-10-14 17:48:15.156136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.265 qpair failed and we were unable to recover it. 
00:31:16.265 [2024-10-14 17:48:15.166028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.265 [2024-10-14 17:48:15.166086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.265 [2024-10-14 17:48:15.166100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.265 [2024-10-14 17:48:15.166107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.265 [2024-10-14 17:48:15.166113] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.265 [2024-10-14 17:48:15.166127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.265 qpair failed and we were unable to recover it. 00:31:16.265 [2024-10-14 17:48:15.176040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.265 [2024-10-14 17:48:15.176098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.265 [2024-10-14 17:48:15.176112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.265 [2024-10-14 17:48:15.176119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.265 [2024-10-14 17:48:15.176124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.265 [2024-10-14 17:48:15.176139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.265 qpair failed and we were unable to recover it. 00:31:16.265 [2024-10-14 17:48:15.186076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.265 [2024-10-14 17:48:15.186149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.265 [2024-10-14 17:48:15.186163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.265 [2024-10-14 17:48:15.186169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.265 [2024-10-14 17:48:15.186175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.265 [2024-10-14 17:48:15.186189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.265 qpair failed and we were unable to recover it. 
00:31:16.265 [2024-10-14 17:48:15.196032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.265 [2024-10-14 17:48:15.196084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.265 [2024-10-14 17:48:15.196098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.265 [2024-10-14 17:48:15.196108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.265 [2024-10-14 17:48:15.196114] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.265 [2024-10-14 17:48:15.196128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.265 qpair failed and we were unable to recover it. 00:31:16.265 [2024-10-14 17:48:15.206168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.265 [2024-10-14 17:48:15.206227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.265 [2024-10-14 17:48:15.206240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.265 [2024-10-14 17:48:15.206247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.265 [2024-10-14 17:48:15.206253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.265 [2024-10-14 17:48:15.206267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.265 qpair failed and we were unable to recover it. 00:31:16.265 [2024-10-14 17:48:15.216162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.265 [2024-10-14 17:48:15.216217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.265 [2024-10-14 17:48:15.216231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.265 [2024-10-14 17:48:15.216238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.265 [2024-10-14 17:48:15.216243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.265 [2024-10-14 17:48:15.216257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.265 qpair failed and we were unable to recover it. 
00:31:16.265 [2024-10-14 17:48:15.226191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.265 [2024-10-14 17:48:15.226240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.265 [2024-10-14 17:48:15.226254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.265 [2024-10-14 17:48:15.226260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.265 [2024-10-14 17:48:15.226266] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.265 [2024-10-14 17:48:15.226281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.265 qpair failed and we were unable to recover it. 00:31:16.265 [2024-10-14 17:48:15.236208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.265 [2024-10-14 17:48:15.236284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.265 [2024-10-14 17:48:15.236297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.266 [2024-10-14 17:48:15.236304] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.266 [2024-10-14 17:48:15.236310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.266 [2024-10-14 17:48:15.236324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.266 qpair failed and we were unable to recover it. 00:31:16.266 [2024-10-14 17:48:15.246249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.266 [2024-10-14 17:48:15.246302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.266 [2024-10-14 17:48:15.246316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.266 [2024-10-14 17:48:15.246322] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.266 [2024-10-14 17:48:15.246328] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.266 [2024-10-14 17:48:15.246342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.266 qpair failed and we were unable to recover it. 
00:31:16.266 [2024-10-14 17:48:15.256268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.266 [2024-10-14 17:48:15.256322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.266 [2024-10-14 17:48:15.256336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.266 [2024-10-14 17:48:15.256342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.266 [2024-10-14 17:48:15.256348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.266 [2024-10-14 17:48:15.256362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.266 qpair failed and we were unable to recover it. 00:31:16.266 [2024-10-14 17:48:15.266299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.266 [2024-10-14 17:48:15.266354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.266 [2024-10-14 17:48:15.266367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.266 [2024-10-14 17:48:15.266374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.266 [2024-10-14 17:48:15.266380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.266 [2024-10-14 17:48:15.266393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.266 qpair failed and we were unable to recover it. 00:31:16.266 [2024-10-14 17:48:15.276317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.266 [2024-10-14 17:48:15.276367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.266 [2024-10-14 17:48:15.276381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.266 [2024-10-14 17:48:15.276387] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.266 [2024-10-14 17:48:15.276393] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.266 [2024-10-14 17:48:15.276407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.266 qpair failed and we were unable to recover it. 
00:31:16.266 [2024-10-14 17:48:15.286335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.266 [2024-10-14 17:48:15.286387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.266 [2024-10-14 17:48:15.286400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.266 [2024-10-14 17:48:15.286410] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.266 [2024-10-14 17:48:15.286416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.266 [2024-10-14 17:48:15.286430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.266 qpair failed and we were unable to recover it. 00:31:16.266 [2024-10-14 17:48:15.296364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.266 [2024-10-14 17:48:15.296451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.266 [2024-10-14 17:48:15.296466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.266 [2024-10-14 17:48:15.296472] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.266 [2024-10-14 17:48:15.296478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.266 [2024-10-14 17:48:15.296492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.266 qpair failed and we were unable to recover it. 00:31:16.266 [2024-10-14 17:48:15.306437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.266 [2024-10-14 17:48:15.306501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.266 [2024-10-14 17:48:15.306515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.266 [2024-10-14 17:48:15.306522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.266 [2024-10-14 17:48:15.306528] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.266 [2024-10-14 17:48:15.306541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.266 qpair failed and we were unable to recover it. 
00:31:16.266 [2024-10-14 17:48:15.316480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.266 [2024-10-14 17:48:15.316532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.266 [2024-10-14 17:48:15.316547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.266 [2024-10-14 17:48:15.316553] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.266 [2024-10-14 17:48:15.316559] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.266 [2024-10-14 17:48:15.316574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.266 qpair failed and we were unable to recover it. 00:31:16.266 [2024-10-14 17:48:15.326518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.266 [2024-10-14 17:48:15.326580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.266 [2024-10-14 17:48:15.326594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.266 [2024-10-14 17:48:15.326603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.266 [2024-10-14 17:48:15.326610] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.266 [2024-10-14 17:48:15.326624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.266 qpair failed and we were unable to recover it. 00:31:16.266 [2024-10-14 17:48:15.336491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.266 [2024-10-14 17:48:15.336546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.266 [2024-10-14 17:48:15.336560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.266 [2024-10-14 17:48:15.336566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.266 [2024-10-14 17:48:15.336572] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.266 [2024-10-14 17:48:15.336586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.266 qpair failed and we were unable to recover it. 
00:31:16.266 [2024-10-14 17:48:15.346548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.266 [2024-10-14 17:48:15.346634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.266 [2024-10-14 17:48:15.346648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.266 [2024-10-14 17:48:15.346655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.266 [2024-10-14 17:48:15.346661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.266 [2024-10-14 17:48:15.346674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.266 qpair failed and we were unable to recover it. 00:31:16.266 [2024-10-14 17:48:15.356495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.266 [2024-10-14 17:48:15.356550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.266 [2024-10-14 17:48:15.356564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.266 [2024-10-14 17:48:15.356570] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.266 [2024-10-14 17:48:15.356576] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.266 [2024-10-14 17:48:15.356590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.266 qpair failed and we were unable to recover it. 00:31:16.266 [2024-10-14 17:48:15.366590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.266 [2024-10-14 17:48:15.366651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.266 [2024-10-14 17:48:15.366665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.266 [2024-10-14 17:48:15.366672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.266 [2024-10-14 17:48:15.366678] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.266 [2024-10-14 17:48:15.366692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.266 qpair failed and we were unable to recover it. 
00:31:16.266 [2024-10-14 17:48:15.376615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.266 [2024-10-14 17:48:15.376674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.266 [2024-10-14 17:48:15.376687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.266 [2024-10-14 17:48:15.376697] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.266 [2024-10-14 17:48:15.376703] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.267 [2024-10-14 17:48:15.376717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.267 qpair failed and we were unable to recover it. 00:31:16.267 [2024-10-14 17:48:15.386635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.267 [2024-10-14 17:48:15.386684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.267 [2024-10-14 17:48:15.386699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.267 [2024-10-14 17:48:15.386705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.267 [2024-10-14 17:48:15.386711] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.267 [2024-10-14 17:48:15.386725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.267 qpair failed and we were unable to recover it. 00:31:16.267 [2024-10-14 17:48:15.396647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.267 [2024-10-14 17:48:15.396698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.267 [2024-10-14 17:48:15.396711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.267 [2024-10-14 17:48:15.396718] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.267 [2024-10-14 17:48:15.396724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.267 [2024-10-14 17:48:15.396738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.267 qpair failed and we were unable to recover it. 
00:31:16.526 [2024-10-14 17:48:15.406716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.526 [2024-10-14 17:48:15.406772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.526 [2024-10-14 17:48:15.406789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.526 [2024-10-14 17:48:15.406796] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.526 [2024-10-14 17:48:15.406802] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.526 [2024-10-14 17:48:15.406818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.526 qpair failed and we were unable to recover it. 00:31:16.526 [2024-10-14 17:48:15.416741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.526 [2024-10-14 17:48:15.416803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.526 [2024-10-14 17:48:15.416820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.526 [2024-10-14 17:48:15.416827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.526 [2024-10-14 17:48:15.416832] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.526 [2024-10-14 17:48:15.416848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.526 qpair failed and we were unable to recover it. 00:31:16.527 [2024-10-14 17:48:15.426742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.527 [2024-10-14 17:48:15.426799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.527 [2024-10-14 17:48:15.426813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.527 [2024-10-14 17:48:15.426820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.527 [2024-10-14 17:48:15.426826] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.527 [2024-10-14 17:48:15.426839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.527 qpair failed and we were unable to recover it. 
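Two negated Linux errnos recur in every attempt: the Connect command fails with rc -5 (-EIO), and spdk_nvme_qpair_process_completions then reports CQ transport error -6 (-ENXIO), whose strerror() text, "No such device or address", is printed by the log itself. A quick check of that mapping, assuming a Linux errno table:

```python
import errno
import os

# Two negated Linux errnos appear in every attempt above:
#   "Connect command failed, rc -5"  -> -EIO   (generic I/O error)
#   "CQ transport error -6 (...)"    -> -ENXIO (no such device or address)
for rc in (-5, -6):
    print(rc, errno.errorcode[-rc], os.strerror(-rc))
```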
00:31:16.527 [2024-10-14 17:48:15.436818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.527 [2024-10-14 17:48:15.436880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.527 [2024-10-14 17:48:15.436895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.527 [2024-10-14 17:48:15.436901] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.527 [2024-10-14 17:48:15.436907] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.527 [2024-10-14 17:48:15.436922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.527 qpair failed and we were unable to recover it. 00:31:16.527 [2024-10-14 17:48:15.446839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.527 [2024-10-14 17:48:15.446896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.527 [2024-10-14 17:48:15.446910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.527 [2024-10-14 17:48:15.446916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.527 [2024-10-14 17:48:15.446922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.527 [2024-10-14 17:48:15.446936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.527 qpair failed and we were unable to recover it. 00:31:16.527 [2024-10-14 17:48:15.456818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.527 [2024-10-14 17:48:15.456870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.527 [2024-10-14 17:48:15.456884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.527 [2024-10-14 17:48:15.456891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.527 [2024-10-14 17:48:15.456897] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.527 [2024-10-14 17:48:15.456911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.527 qpair failed and we were unable to recover it. 
00:31:16.527 [2024-10-14 17:48:15.466849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.527 [2024-10-14 17:48:15.466943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.527 [2024-10-14 17:48:15.466956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.527 [2024-10-14 17:48:15.466966] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.527 [2024-10-14 17:48:15.466971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.527 [2024-10-14 17:48:15.466984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.527 qpair failed and we were unable to recover it. 00:31:16.527 [2024-10-14 17:48:15.476925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.527 [2024-10-14 17:48:15.476976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.527 [2024-10-14 17:48:15.476990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.527 [2024-10-14 17:48:15.476996] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.527 [2024-10-14 17:48:15.477002] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.527 [2024-10-14 17:48:15.477016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.527 qpair failed and we were unable to recover it. 00:31:16.527 [2024-10-14 17:48:15.486898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.527 [2024-10-14 17:48:15.486952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.527 [2024-10-14 17:48:15.486966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.527 [2024-10-14 17:48:15.486972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.527 [2024-10-14 17:48:15.486978] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.527 [2024-10-14 17:48:15.486992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.527 qpair failed and we were unable to recover it. 
00:31:16.527 [2024-10-14 17:48:15.496941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.527 [2024-10-14 17:48:15.496994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.527 [2024-10-14 17:48:15.497007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.527 [2024-10-14 17:48:15.497014] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.527 [2024-10-14 17:48:15.497019] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.527 [2024-10-14 17:48:15.497033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.527 qpair failed and we were unable to recover it. 00:31:16.527 [2024-10-14 17:48:15.506909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.527 [2024-10-14 17:48:15.506964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.527 [2024-10-14 17:48:15.506978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.527 [2024-10-14 17:48:15.506984] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.527 [2024-10-14 17:48:15.506990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.527 [2024-10-14 17:48:15.507004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.527 qpair failed and we were unable to recover it. 00:31:16.527 [2024-10-14 17:48:15.517000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.527 [2024-10-14 17:48:15.517049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.527 [2024-10-14 17:48:15.517064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.527 [2024-10-14 17:48:15.517071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.527 [2024-10-14 17:48:15.517077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.527 [2024-10-14 17:48:15.517092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.527 qpair failed and we were unable to recover it. 
00:31:16.527 [2024-10-14 17:48:15.527032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.527 [2024-10-14 17:48:15.527097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.527 [2024-10-14 17:48:15.527110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.527 [2024-10-14 17:48:15.527117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.527 [2024-10-14 17:48:15.527123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.527 [2024-10-14 17:48:15.527137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.527 qpair failed and we were unable to recover it. 00:31:16.527 [2024-10-14 17:48:15.536973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.527 [2024-10-14 17:48:15.537064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.527 [2024-10-14 17:48:15.537078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.527 [2024-10-14 17:48:15.537084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.527 [2024-10-14 17:48:15.537090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.527 [2024-10-14 17:48:15.537104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.527 qpair failed and we were unable to recover it. 00:31:16.527 [2024-10-14 17:48:15.547070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.527 [2024-10-14 17:48:15.547125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.527 [2024-10-14 17:48:15.547138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.527 [2024-10-14 17:48:15.547145] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.527 [2024-10-14 17:48:15.547150] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.527 [2024-10-14 17:48:15.547163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.527 qpair failed and we were unable to recover it. 
00:31:16.527 [2024-10-14 17:48:15.557094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.527 [2024-10-14 17:48:15.557174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.527 [2024-10-14 17:48:15.557191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.527 [2024-10-14 17:48:15.557198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.527 [2024-10-14 17:48:15.557203] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.527 [2024-10-14 17:48:15.557216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.527 qpair failed and we were unable to recover it. 00:31:16.527 [2024-10-14 17:48:15.567125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.528 [2024-10-14 17:48:15.567180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.528 [2024-10-14 17:48:15.567195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.528 [2024-10-14 17:48:15.567201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.528 [2024-10-14 17:48:15.567207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.528 [2024-10-14 17:48:15.567221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.528 qpair failed and we were unable to recover it. 00:31:16.528 [2024-10-14 17:48:15.577236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.528 [2024-10-14 17:48:15.577307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.528 [2024-10-14 17:48:15.577320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.528 [2024-10-14 17:48:15.577327] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.528 [2024-10-14 17:48:15.577332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.528 [2024-10-14 17:48:15.577345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.528 qpair failed and we were unable to recover it. 
00:31:16.528 [2024-10-14 17:48:15.587180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.528 [2024-10-14 17:48:15.587248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.528 [2024-10-14 17:48:15.587262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.528 [2024-10-14 17:48:15.587269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.528 [2024-10-14 17:48:15.587274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.528 [2024-10-14 17:48:15.587288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.528 qpair failed and we were unable to recover it. 00:31:16.528 [2024-10-14 17:48:15.597235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.528 [2024-10-14 17:48:15.597290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.528 [2024-10-14 17:48:15.597304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.528 [2024-10-14 17:48:15.597310] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.528 [2024-10-14 17:48:15.597316] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.528 [2024-10-14 17:48:15.597330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.528 qpair failed and we were unable to recover it. 00:31:16.528 [2024-10-14 17:48:15.607224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.528 [2024-10-14 17:48:15.607287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.528 [2024-10-14 17:48:15.607301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.528 [2024-10-14 17:48:15.607307] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.528 [2024-10-14 17:48:15.607313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.528 [2024-10-14 17:48:15.607327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.528 qpair failed and we were unable to recover it. 
00:31:16.528 [2024-10-14 17:48:15.617263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.528 [2024-10-14 17:48:15.617315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.528 [2024-10-14 17:48:15.617329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.528 [2024-10-14 17:48:15.617335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.528 [2024-10-14 17:48:15.617341] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.528 [2024-10-14 17:48:15.617355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.528 qpair failed and we were unable to recover it. 00:31:16.528 [2024-10-14 17:48:15.627237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.528 [2024-10-14 17:48:15.627294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.528 [2024-10-14 17:48:15.627308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.528 [2024-10-14 17:48:15.627314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.528 [2024-10-14 17:48:15.627320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.528 [2024-10-14 17:48:15.627333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.528 qpair failed and we were unable to recover it. 00:31:16.528 [2024-10-14 17:48:15.637356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.528 [2024-10-14 17:48:15.637419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.528 [2024-10-14 17:48:15.637433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.528 [2024-10-14 17:48:15.637439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.528 [2024-10-14 17:48:15.637444] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.528 [2024-10-14 17:48:15.637458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.528 qpair failed and we were unable to recover it. 
00:31:16.528 [2024-10-14 17:48:15.647371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.528 [2024-10-14 17:48:15.647429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.528 [2024-10-14 17:48:15.647449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.528 [2024-10-14 17:48:15.647456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.528 [2024-10-14 17:48:15.647462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.528 [2024-10-14 17:48:15.647475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.528 qpair failed and we were unable to recover it. 00:31:16.528 [2024-10-14 17:48:15.657327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.528 [2024-10-14 17:48:15.657382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.528 [2024-10-14 17:48:15.657395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.528 [2024-10-14 17:48:15.657402] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.528 [2024-10-14 17:48:15.657408] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.528 [2024-10-14 17:48:15.657421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.528 qpair failed and we were unable to recover it. 00:31:16.788 [2024-10-14 17:48:15.667421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.788 [2024-10-14 17:48:15.667478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.788 [2024-10-14 17:48:15.667496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.788 [2024-10-14 17:48:15.667503] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.788 [2024-10-14 17:48:15.667509] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.788 [2024-10-14 17:48:15.667524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.788 qpair failed and we were unable to recover it. 
00:31:16.788 [2024-10-14 17:48:15.677447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.788 [2024-10-14 17:48:15.677499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.788 [2024-10-14 17:48:15.677515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.789 [2024-10-14 17:48:15.677523] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.789 [2024-10-14 17:48:15.677528] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.789 [2024-10-14 17:48:15.677543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.789 qpair failed and we were unable to recover it. 00:31:16.789 [2024-10-14 17:48:15.687511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.789 [2024-10-14 17:48:15.687567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.789 [2024-10-14 17:48:15.687581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.789 [2024-10-14 17:48:15.687587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.789 [2024-10-14 17:48:15.687593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.789 [2024-10-14 17:48:15.687612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.789 qpair failed and we were unable to recover it. 00:31:16.789 [2024-10-14 17:48:15.697508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.789 [2024-10-14 17:48:15.697566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.789 [2024-10-14 17:48:15.697580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.789 [2024-10-14 17:48:15.697586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.789 [2024-10-14 17:48:15.697592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.789 [2024-10-14 17:48:15.697610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.789 qpair failed and we were unable to recover it. 
00:31:16.789 [2024-10-14 17:48:15.707542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.789 [2024-10-14 17:48:15.707595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.789 [2024-10-14 17:48:15.707615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.789 [2024-10-14 17:48:15.707622] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.789 [2024-10-14 17:48:15.707627] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.789 [2024-10-14 17:48:15.707642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.789 qpair failed and we were unable to recover it. 00:31:16.789 [2024-10-14 17:48:15.717555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.789 [2024-10-14 17:48:15.717615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.789 [2024-10-14 17:48:15.717630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.789 [2024-10-14 17:48:15.717637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.789 [2024-10-14 17:48:15.717644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.789 [2024-10-14 17:48:15.717658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.789 qpair failed and we were unable to recover it. 00:31:16.789 [2024-10-14 17:48:15.727571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.789 [2024-10-14 17:48:15.727659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.789 [2024-10-14 17:48:15.727674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.789 [2024-10-14 17:48:15.727680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.789 [2024-10-14 17:48:15.727686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.789 [2024-10-14 17:48:15.727700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.789 qpair failed and we were unable to recover it. 
00:31:16.789 [2024-10-14 17:48:15.737626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.789 [2024-10-14 17:48:15.737682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.789 [2024-10-14 17:48:15.737699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.789 [2024-10-14 17:48:15.737705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.789 [2024-10-14 17:48:15.737711] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.789 [2024-10-14 17:48:15.737725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.789 qpair failed and we were unable to recover it. 00:31:16.789 [2024-10-14 17:48:15.747662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.789 [2024-10-14 17:48:15.747718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.789 [2024-10-14 17:48:15.747731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.789 [2024-10-14 17:48:15.747738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.789 [2024-10-14 17:48:15.747744] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.789 [2024-10-14 17:48:15.747757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.789 qpair failed and we were unable to recover it. 00:31:16.789 [2024-10-14 17:48:15.757685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.789 [2024-10-14 17:48:15.757744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.789 [2024-10-14 17:48:15.757757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.789 [2024-10-14 17:48:15.757764] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.789 [2024-10-14 17:48:15.757770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.789 [2024-10-14 17:48:15.757784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.789 qpair failed and we were unable to recover it. 
00:31:16.789 [2024-10-14 17:48:15.767734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.789 [2024-10-14 17:48:15.767791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.789 [2024-10-14 17:48:15.767806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.789 [2024-10-14 17:48:15.767813] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.789 [2024-10-14 17:48:15.767819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.789 [2024-10-14 17:48:15.767833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.789 qpair failed and we were unable to recover it. 00:31:16.789 [2024-10-14 17:48:15.777674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.789 [2024-10-14 17:48:15.777729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.789 [2024-10-14 17:48:15.777743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.789 [2024-10-14 17:48:15.777750] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.789 [2024-10-14 17:48:15.777756] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.789 [2024-10-14 17:48:15.777772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.789 qpair failed and we were unable to recover it. 00:31:16.789 [2024-10-14 17:48:15.787763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.789 [2024-10-14 17:48:15.787815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.789 [2024-10-14 17:48:15.787828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.789 [2024-10-14 17:48:15.787834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.789 [2024-10-14 17:48:15.787840] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.789 [2024-10-14 17:48:15.787854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.789 qpair failed and we were unable to recover it. 
00:31:16.789 [2024-10-14 17:48:15.797793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.789 [2024-10-14 17:48:15.797842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.789 [2024-10-14 17:48:15.797855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.789 [2024-10-14 17:48:15.797862] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.789 [2024-10-14 17:48:15.797868] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.789 [2024-10-14 17:48:15.797881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.789 qpair failed and we were unable to recover it. 00:31:16.789 [2024-10-14 17:48:15.807829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.789 [2024-10-14 17:48:15.807882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.789 [2024-10-14 17:48:15.807895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.789 [2024-10-14 17:48:15.807902] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.789 [2024-10-14 17:48:15.807908] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.789 [2024-10-14 17:48:15.807921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.789 qpair failed and we were unable to recover it. 00:31:16.789 [2024-10-14 17:48:15.817855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.789 [2024-10-14 17:48:15.817927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.789 [2024-10-14 17:48:15.817940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.789 [2024-10-14 17:48:15.817947] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.789 [2024-10-14 17:48:15.817953] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.790 [2024-10-14 17:48:15.817967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.790 qpair failed and we were unable to recover it. 
00:31:16.790 [2024-10-14 17:48:15.827939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.790 [2024-10-14 17:48:15.827999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.790 [2024-10-14 17:48:15.828015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.790 [2024-10-14 17:48:15.828022] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.790 [2024-10-14 17:48:15.828027] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.790 [2024-10-14 17:48:15.828041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.790 qpair failed and we were unable to recover it. 00:31:16.790 [2024-10-14 17:48:15.837931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.790 [2024-10-14 17:48:15.837984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.790 [2024-10-14 17:48:15.837998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.790 [2024-10-14 17:48:15.838004] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.790 [2024-10-14 17:48:15.838010] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.790 [2024-10-14 17:48:15.838024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.790 qpair failed and we were unable to recover it. 00:31:16.790 [2024-10-14 17:48:15.847978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.790 [2024-10-14 17:48:15.848071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.790 [2024-10-14 17:48:15.848087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.790 [2024-10-14 17:48:15.848095] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.790 [2024-10-14 17:48:15.848100] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.790 [2024-10-14 17:48:15.848115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.790 qpair failed and we were unable to recover it. 
00:31:16.790 [2024-10-14 17:48:15.857968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.790 [2024-10-14 17:48:15.858020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.790 [2024-10-14 17:48:15.858034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.790 [2024-10-14 17:48:15.858040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.790 [2024-10-14 17:48:15.858046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.790 [2024-10-14 17:48:15.858060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.790 qpair failed and we were unable to recover it. 00:31:16.790 [2024-10-14 17:48:15.868012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.790 [2024-10-14 17:48:15.868065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.790 [2024-10-14 17:48:15.868080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.790 [2024-10-14 17:48:15.868086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.790 [2024-10-14 17:48:15.868092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.790 [2024-10-14 17:48:15.868109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.790 qpair failed and we were unable to recover it. 00:31:16.790 [2024-10-14 17:48:15.878028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.790 [2024-10-14 17:48:15.878082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.790 [2024-10-14 17:48:15.878096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.790 [2024-10-14 17:48:15.878102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.790 [2024-10-14 17:48:15.878108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.790 [2024-10-14 17:48:15.878122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.790 qpair failed and we were unable to recover it. 
00:31:16.790 [2024-10-14 17:48:15.888067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.790 [2024-10-14 17:48:15.888135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.790 [2024-10-14 17:48:15.888149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.790 [2024-10-14 17:48:15.888155] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.790 [2024-10-14 17:48:15.888161] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.790 [2024-10-14 17:48:15.888175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.790 qpair failed and we were unable to recover it. 00:31:16.790 [2024-10-14 17:48:15.898082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.790 [2024-10-14 17:48:15.898133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.790 [2024-10-14 17:48:15.898146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.790 [2024-10-14 17:48:15.898152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.790 [2024-10-14 17:48:15.898158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.790 [2024-10-14 17:48:15.898172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.790 qpair failed and we were unable to recover it. 00:31:16.790 [2024-10-14 17:48:15.908079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.790 [2024-10-14 17:48:15.908163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.790 [2024-10-14 17:48:15.908176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.790 [2024-10-14 17:48:15.908182] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.790 [2024-10-14 17:48:15.908188] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.790 [2024-10-14 17:48:15.908202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.790 qpair failed and we were unable to recover it. 
00:31:16.790 [2024-10-14 17:48:15.918132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.790 [2024-10-14 17:48:15.918182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.790 [2024-10-14 17:48:15.918198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.790 [2024-10-14 17:48:15.918205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.790 [2024-10-14 17:48:15.918211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:16.790 [2024-10-14 17:48:15.918225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:16.790 qpair failed and we were unable to recover it. 00:31:17.051 [2024-10-14 17:48:15.928158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.051 [2024-10-14 17:48:15.928220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.051 [2024-10-14 17:48:15.928237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.051 [2024-10-14 17:48:15.928244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.051 [2024-10-14 17:48:15.928250] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.051 [2024-10-14 17:48:15.928265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.051 qpair failed and we were unable to recover it. 00:31:17.051 [2024-10-14 17:48:15.938182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.051 [2024-10-14 17:48:15.938235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.051 [2024-10-14 17:48:15.938251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.051 [2024-10-14 17:48:15.938258] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.051 [2024-10-14 17:48:15.938263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.051 [2024-10-14 17:48:15.938278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.051 qpair failed and we were unable to recover it. 
00:31:17.051 [2024-10-14 17:48:15.948214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.051 [2024-10-14 17:48:15.948316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.051 [2024-10-14 17:48:15.948330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.051 [2024-10-14 17:48:15.948336] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.051 [2024-10-14 17:48:15.948342] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.051 [2024-10-14 17:48:15.948356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.051 qpair failed and we were unable to recover it. 00:31:17.051 [2024-10-14 17:48:15.958231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.051 [2024-10-14 17:48:15.958312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.051 [2024-10-14 17:48:15.958326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.051 [2024-10-14 17:48:15.958332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.051 [2024-10-14 17:48:15.958338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.051 [2024-10-14 17:48:15.958355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.051 qpair failed and we were unable to recover it. 00:31:17.051 [2024-10-14 17:48:15.968275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.051 [2024-10-14 17:48:15.968330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.051 [2024-10-14 17:48:15.968344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.051 [2024-10-14 17:48:15.968351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.051 [2024-10-14 17:48:15.968357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.051 [2024-10-14 17:48:15.968371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.051 qpair failed and we were unable to recover it. 
00:31:17.051 [2024-10-14 17:48:15.978348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.051 [2024-10-14 17:48:15.978406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.051 [2024-10-14 17:48:15.978420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.051 [2024-10-14 17:48:15.978427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.051 [2024-10-14 17:48:15.978433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.051 [2024-10-14 17:48:15.978447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.051 qpair failed and we were unable to recover it. 00:31:17.051 [2024-10-14 17:48:15.988384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.051 [2024-10-14 17:48:15.988449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.051 [2024-10-14 17:48:15.988463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.051 [2024-10-14 17:48:15.988470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.051 [2024-10-14 17:48:15.988475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.051 [2024-10-14 17:48:15.988489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.051 qpair failed and we were unable to recover it. 00:31:17.051 [2024-10-14 17:48:15.998374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.051 [2024-10-14 17:48:15.998425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.051 [2024-10-14 17:48:15.998439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.051 [2024-10-14 17:48:15.998445] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.051 [2024-10-14 17:48:15.998451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.051 [2024-10-14 17:48:15.998465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.051 qpair failed and we were unable to recover it. 
00:31:17.051 [2024-10-14 17:48:16.008367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.051 [2024-10-14 17:48:16.008420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.051 [2024-10-14 17:48:16.008436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.051 [2024-10-14 17:48:16.008442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.051 [2024-10-14 17:48:16.008448] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.051 [2024-10-14 17:48:16.008462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.051 qpair failed and we were unable to recover it. 00:31:17.051 [2024-10-14 17:48:16.018399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.051 [2024-10-14 17:48:16.018468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.051 [2024-10-14 17:48:16.018483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.051 [2024-10-14 17:48:16.018489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.051 [2024-10-14 17:48:16.018495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.051 [2024-10-14 17:48:16.018511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.051 qpair failed and we were unable to recover it. 00:31:17.051 [2024-10-14 17:48:16.028445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.051 [2024-10-14 17:48:16.028498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.051 [2024-10-14 17:48:16.028512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.051 [2024-10-14 17:48:16.028519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.051 [2024-10-14 17:48:16.028525] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.051 [2024-10-14 17:48:16.028539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.051 qpair failed and we were unable to recover it. 
00:31:17.051 [2024-10-14 17:48:16.038482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.051 [2024-10-14 17:48:16.038534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.051 [2024-10-14 17:48:16.038548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.051 [2024-10-14 17:48:16.038555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.051 [2024-10-14 17:48:16.038561] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.051 [2024-10-14 17:48:16.038575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.051 qpair failed and we were unable to recover it. 00:31:17.051 [2024-10-14 17:48:16.048515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.052 [2024-10-14 17:48:16.048569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.052 [2024-10-14 17:48:16.048584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.052 [2024-10-14 17:48:16.048591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.052 [2024-10-14 17:48:16.048599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.052 [2024-10-14 17:48:16.048617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.052 qpair failed and we were unable to recover it. 00:31:17.052 [2024-10-14 17:48:16.058539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.052 [2024-10-14 17:48:16.058594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.052 [2024-10-14 17:48:16.058612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.052 [2024-10-14 17:48:16.058619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.052 [2024-10-14 17:48:16.058625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.052 [2024-10-14 17:48:16.058638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.052 qpair failed and we were unable to recover it. 
00:31:17.052 [2024-10-14 17:48:16.068544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.052 [2024-10-14 17:48:16.068644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.052 [2024-10-14 17:48:16.068658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.052 [2024-10-14 17:48:16.068665] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.052 [2024-10-14 17:48:16.068671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.052 [2024-10-14 17:48:16.068684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.052 qpair failed and we were unable to recover it. 00:31:17.052 [2024-10-14 17:48:16.078605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.052 [2024-10-14 17:48:16.078655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.052 [2024-10-14 17:48:16.078669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.052 [2024-10-14 17:48:16.078676] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.052 [2024-10-14 17:48:16.078681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.052 [2024-10-14 17:48:16.078695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.052 qpair failed and we were unable to recover it. 00:31:17.052 [2024-10-14 17:48:16.088629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.052 [2024-10-14 17:48:16.088695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.052 [2024-10-14 17:48:16.088709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.052 [2024-10-14 17:48:16.088716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.052 [2024-10-14 17:48:16.088722] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.052 [2024-10-14 17:48:16.088735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.052 qpair failed and we were unable to recover it. 
00:31:17.052 [2024-10-14 17:48:16.098680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.052 [2024-10-14 17:48:16.098741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.052 [2024-10-14 17:48:16.098755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.052 [2024-10-14 17:48:16.098761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.052 [2024-10-14 17:48:16.098767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.052 [2024-10-14 17:48:16.098781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.052 qpair failed and we were unable to recover it. 00:31:17.052 [2024-10-14 17:48:16.108707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.052 [2024-10-14 17:48:16.108763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.052 [2024-10-14 17:48:16.108776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.052 [2024-10-14 17:48:16.108784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.052 [2024-10-14 17:48:16.108789] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.052 [2024-10-14 17:48:16.108803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.052 qpair failed and we were unable to recover it. 00:31:17.052 [2024-10-14 17:48:16.118754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.052 [2024-10-14 17:48:16.118807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.052 [2024-10-14 17:48:16.118821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.052 [2024-10-14 17:48:16.118827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.052 [2024-10-14 17:48:16.118834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.052 [2024-10-14 17:48:16.118848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.052 qpair failed and we were unable to recover it. 
00:31:17.052 [2024-10-14 17:48:16.128749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.052 [2024-10-14 17:48:16.128806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.052 [2024-10-14 17:48:16.128821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.052 [2024-10-14 17:48:16.128828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.052 [2024-10-14 17:48:16.128834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.052 [2024-10-14 17:48:16.128848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.052 qpair failed and we were unable to recover it. 00:31:17.052 [2024-10-14 17:48:16.138796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.052 [2024-10-14 17:48:16.138849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.052 [2024-10-14 17:48:16.138862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.052 [2024-10-14 17:48:16.138869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.052 [2024-10-14 17:48:16.138879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.052 [2024-10-14 17:48:16.138893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.052 qpair failed and we were unable to recover it. 00:31:17.052 [2024-10-14 17:48:16.148797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.052 [2024-10-14 17:48:16.148848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.052 [2024-10-14 17:48:16.148862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.052 [2024-10-14 17:48:16.148868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.052 [2024-10-14 17:48:16.148874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.052 [2024-10-14 17:48:16.148888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.052 qpair failed and we were unable to recover it. 
00:31:17.052 [2024-10-14 17:48:16.158830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.052 [2024-10-14 17:48:16.158886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.052 [2024-10-14 17:48:16.158899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.052 [2024-10-14 17:48:16.158906] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.052 [2024-10-14 17:48:16.158911] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.052 [2024-10-14 17:48:16.158926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.052 qpair failed and we were unable to recover it. 00:31:17.052 [2024-10-14 17:48:16.168901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.052 [2024-10-14 17:48:16.168960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.052 [2024-10-14 17:48:16.168974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.052 [2024-10-14 17:48:16.168981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.052 [2024-10-14 17:48:16.168987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.052 [2024-10-14 17:48:16.169001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.052 qpair failed and we were unable to recover it. 00:31:17.052 [2024-10-14 17:48:16.178909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.053 [2024-10-14 17:48:16.178968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.053 [2024-10-14 17:48:16.178982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.053 [2024-10-14 17:48:16.178988] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.053 [2024-10-14 17:48:16.178994] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.053 [2024-10-14 17:48:16.179008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.053 qpair failed and we were unable to recover it. 
00:31:17.053 [2024-10-14 17:48:16.188926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.053 [2024-10-14 17:48:16.188985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.053 [2024-10-14 17:48:16.189001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.053 [2024-10-14 17:48:16.189009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.053 [2024-10-14 17:48:16.189015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.053 [2024-10-14 17:48:16.189030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.053 qpair failed and we were unable to recover it. 00:31:17.313 [2024-10-14 17:48:16.198872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.313 [2024-10-14 17:48:16.198928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.313 [2024-10-14 17:48:16.198944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.313 [2024-10-14 17:48:16.198951] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.313 [2024-10-14 17:48:16.198957] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.313 [2024-10-14 17:48:16.198972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.313 qpair failed and we were unable to recover it. 00:31:17.313 [2024-10-14 17:48:16.208949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.313 [2024-10-14 17:48:16.209006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.313 [2024-10-14 17:48:16.209020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.313 [2024-10-14 17:48:16.209027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.313 [2024-10-14 17:48:16.209033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.313 [2024-10-14 17:48:16.209048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.313 qpair failed and we were unable to recover it. 
00:31:17.313 [2024-10-14 17:48:16.219039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.313 [2024-10-14 17:48:16.219110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.313 [2024-10-14 17:48:16.219124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.313 [2024-10-14 17:48:16.219131] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.313 [2024-10-14 17:48:16.219136] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.313 [2024-10-14 17:48:16.219150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.313 qpair failed and we were unable to recover it.
00:31:17.313 [2024-10-14 17:48:16.229017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.313 [2024-10-14 17:48:16.229080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.313 [2024-10-14 17:48:16.229094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.313 [2024-10-14 17:48:16.229100] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.313 [2024-10-14 17:48:16.229109] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.313 [2024-10-14 17:48:16.229123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.313 qpair failed and we were unable to recover it.
00:31:17.313 [2024-10-14 17:48:16.239044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.313 [2024-10-14 17:48:16.239095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.313 [2024-10-14 17:48:16.239109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.313 [2024-10-14 17:48:16.239115] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.313 [2024-10-14 17:48:16.239121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.313 [2024-10-14 17:48:16.239135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.313 qpair failed and we were unable to recover it.
00:31:17.313 [2024-10-14 17:48:16.249105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.313 [2024-10-14 17:48:16.249162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.313 [2024-10-14 17:48:16.249176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.313 [2024-10-14 17:48:16.249183] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.313 [2024-10-14 17:48:16.249189] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.313 [2024-10-14 17:48:16.249202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.313 qpair failed and we were unable to recover it.
00:31:17.313 [2024-10-14 17:48:16.259104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.313 [2024-10-14 17:48:16.259159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.313 [2024-10-14 17:48:16.259173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.313 [2024-10-14 17:48:16.259179] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.313 [2024-10-14 17:48:16.259185] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.313 [2024-10-14 17:48:16.259198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.313 qpair failed and we were unable to recover it.
00:31:17.313 [2024-10-14 17:48:16.269179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.313 [2024-10-14 17:48:16.269241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.313 [2024-10-14 17:48:16.269256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.313 [2024-10-14 17:48:16.269262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.313 [2024-10-14 17:48:16.269268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.313 [2024-10-14 17:48:16.269282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.313 qpair failed and we were unable to recover it.
00:31:17.313 [2024-10-14 17:48:16.279155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.313 [2024-10-14 17:48:16.279234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.313 [2024-10-14 17:48:16.279247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.313 [2024-10-14 17:48:16.279253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.313 [2024-10-14 17:48:16.279259] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.313 [2024-10-14 17:48:16.279274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.313 qpair failed and we were unable to recover it.
00:31:17.313 [2024-10-14 17:48:16.289199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.313 [2024-10-14 17:48:16.289258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.313 [2024-10-14 17:48:16.289272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.313 [2024-10-14 17:48:16.289280] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.313 [2024-10-14 17:48:16.289285] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.313 [2024-10-14 17:48:16.289299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.313 qpair failed and we were unable to recover it.
00:31:17.313 [2024-10-14 17:48:16.299230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.313 [2024-10-14 17:48:16.299286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.313 [2024-10-14 17:48:16.299299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.313 [2024-10-14 17:48:16.299306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.314 [2024-10-14 17:48:16.299312] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.314 [2024-10-14 17:48:16.299326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.314 qpair failed and we were unable to recover it.
00:31:17.314 [2024-10-14 17:48:16.309252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.314 [2024-10-14 17:48:16.309300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.314 [2024-10-14 17:48:16.309315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.314 [2024-10-14 17:48:16.309321] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.314 [2024-10-14 17:48:16.309327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.314 [2024-10-14 17:48:16.309340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.314 qpair failed and we were unable to recover it.
00:31:17.314 [2024-10-14 17:48:16.319276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.314 [2024-10-14 17:48:16.319332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.314 [2024-10-14 17:48:16.319346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.314 [2024-10-14 17:48:16.319352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.314 [2024-10-14 17:48:16.319361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.314 [2024-10-14 17:48:16.319376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.314 qpair failed and we were unable to recover it.
00:31:17.314 [2024-10-14 17:48:16.329352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.314 [2024-10-14 17:48:16.329407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.314 [2024-10-14 17:48:16.329421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.314 [2024-10-14 17:48:16.329428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.314 [2024-10-14 17:48:16.329434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.314 [2024-10-14 17:48:16.329448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.314 qpair failed and we were unable to recover it.
00:31:17.314 [2024-10-14 17:48:16.339358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.314 [2024-10-14 17:48:16.339430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.314 [2024-10-14 17:48:16.339444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.314 [2024-10-14 17:48:16.339451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.314 [2024-10-14 17:48:16.339457] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.314 [2024-10-14 17:48:16.339472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.314 qpair failed and we were unable to recover it.
00:31:17.314 [2024-10-14 17:48:16.349365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.314 [2024-10-14 17:48:16.349416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.314 [2024-10-14 17:48:16.349431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.314 [2024-10-14 17:48:16.349437] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.314 [2024-10-14 17:48:16.349443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.314 [2024-10-14 17:48:16.349457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.314 qpair failed and we were unable to recover it.
00:31:17.314 [2024-10-14 17:48:16.359388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.314 [2024-10-14 17:48:16.359439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.314 [2024-10-14 17:48:16.359453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.314 [2024-10-14 17:48:16.359459] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.314 [2024-10-14 17:48:16.359465] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.314 [2024-10-14 17:48:16.359479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.314 qpair failed and we were unable to recover it.
00:31:17.314 [2024-10-14 17:48:16.369419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.314 [2024-10-14 17:48:16.369582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.314 [2024-10-14 17:48:16.369596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.314 [2024-10-14 17:48:16.369607] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.314 [2024-10-14 17:48:16.369613] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.314 [2024-10-14 17:48:16.369627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.314 qpair failed and we were unable to recover it.
00:31:17.314 [2024-10-14 17:48:16.379450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.314 [2024-10-14 17:48:16.379503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.314 [2024-10-14 17:48:16.379516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.314 [2024-10-14 17:48:16.379523] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.314 [2024-10-14 17:48:16.379529] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.314 [2024-10-14 17:48:16.379543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.314 qpair failed and we were unable to recover it.
00:31:17.314 [2024-10-14 17:48:16.389540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.314 [2024-10-14 17:48:16.389608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.314 [2024-10-14 17:48:16.389623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.314 [2024-10-14 17:48:16.389630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.314 [2024-10-14 17:48:16.389635] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.314 [2024-10-14 17:48:16.389649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.314 qpair failed and we were unable to recover it.
00:31:17.314 [2024-10-14 17:48:16.399435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.314 [2024-10-14 17:48:16.399489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.314 [2024-10-14 17:48:16.399502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.314 [2024-10-14 17:48:16.399509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.314 [2024-10-14 17:48:16.399514] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.314 [2024-10-14 17:48:16.399528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.314 qpair failed and we were unable to recover it.
00:31:17.314 [2024-10-14 17:48:16.409559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.314 [2024-10-14 17:48:16.409635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.314 [2024-10-14 17:48:16.409649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.314 [2024-10-14 17:48:16.409659] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.314 [2024-10-14 17:48:16.409665] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.314 [2024-10-14 17:48:16.409678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.314 qpair failed and we were unable to recover it.
00:31:17.314 [2024-10-14 17:48:16.419502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.314 [2024-10-14 17:48:16.419565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.314 [2024-10-14 17:48:16.419579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.314 [2024-10-14 17:48:16.419585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.314 [2024-10-14 17:48:16.419591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.314 [2024-10-14 17:48:16.419608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.314 qpair failed and we were unable to recover it.
00:31:17.314 [2024-10-14 17:48:16.429602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.314 [2024-10-14 17:48:16.429656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.314 [2024-10-14 17:48:16.429670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.314 [2024-10-14 17:48:16.429677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.314 [2024-10-14 17:48:16.429682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.314 [2024-10-14 17:48:16.429696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.314 qpair failed and we were unable to recover it.
00:31:17.314 [2024-10-14 17:48:16.439682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.314 [2024-10-14 17:48:16.439740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.314 [2024-10-14 17:48:16.439753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.314 [2024-10-14 17:48:16.439760] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.314 [2024-10-14 17:48:16.439766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.314 [2024-10-14 17:48:16.439779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.315 qpair failed and we were unable to recover it.
00:31:17.315 [2024-10-14 17:48:16.449654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.315 [2024-10-14 17:48:16.449710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.315 [2024-10-14 17:48:16.449726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.315 [2024-10-14 17:48:16.449734] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.315 [2024-10-14 17:48:16.449740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.315 [2024-10-14 17:48:16.449754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.315 qpair failed and we were unable to recover it.
00:31:17.575 [2024-10-14 17:48:16.459694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.575 [2024-10-14 17:48:16.459749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.575 [2024-10-14 17:48:16.459766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.575 [2024-10-14 17:48:16.459773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.575 [2024-10-14 17:48:16.459779] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.575 [2024-10-14 17:48:16.459794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.575 qpair failed and we were unable to recover it.
00:31:17.575 [2024-10-14 17:48:16.469681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.575 [2024-10-14 17:48:16.469753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.575 [2024-10-14 17:48:16.469768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.575 [2024-10-14 17:48:16.469775] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.575 [2024-10-14 17:48:16.469781] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.575 [2024-10-14 17:48:16.469795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.575 qpair failed and we were unable to recover it.
00:31:17.575 [2024-10-14 17:48:16.479720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.575 [2024-10-14 17:48:16.479800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.575 [2024-10-14 17:48:16.479814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.575 [2024-10-14 17:48:16.479820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.575 [2024-10-14 17:48:16.479826] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.575 [2024-10-14 17:48:16.479841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.575 qpair failed and we were unable to recover it.
00:31:17.575 [2024-10-14 17:48:16.489765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.575 [2024-10-14 17:48:16.489820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.575 [2024-10-14 17:48:16.489834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.575 [2024-10-14 17:48:16.489841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.575 [2024-10-14 17:48:16.489847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.575 [2024-10-14 17:48:16.489862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.575 qpair failed and we were unable to recover it.
00:31:17.575 [2024-10-14 17:48:16.499785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.575 [2024-10-14 17:48:16.499843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.575 [2024-10-14 17:48:16.499856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.575 [2024-10-14 17:48:16.499868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.575 [2024-10-14 17:48:16.499874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.575 [2024-10-14 17:48:16.499888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.575 qpair failed and we were unable to recover it.
00:31:17.575 [2024-10-14 17:48:16.509833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.575 [2024-10-14 17:48:16.509886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.575 [2024-10-14 17:48:16.509900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.575 [2024-10-14 17:48:16.509906] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.575 [2024-10-14 17:48:16.509912] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.575 [2024-10-14 17:48:16.509926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.575 qpair failed and we were unable to recover it.
00:31:17.575 [2024-10-14 17:48:16.519826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.575 [2024-10-14 17:48:16.519904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.575 [2024-10-14 17:48:16.519918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.575 [2024-10-14 17:48:16.519925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.575 [2024-10-14 17:48:16.519930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.575 [2024-10-14 17:48:16.519944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.575 qpair failed and we were unable to recover it.
00:31:17.575 [2024-10-14 17:48:16.529932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.575 [2024-10-14 17:48:16.529990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.575 [2024-10-14 17:48:16.530004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.575 [2024-10-14 17:48:16.530010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.575 [2024-10-14 17:48:16.530016] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.575 [2024-10-14 17:48:16.530030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.575 qpair failed and we were unable to recover it.
00:31:17.575 [2024-10-14 17:48:16.539935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.575 [2024-10-14 17:48:16.539989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.575 [2024-10-14 17:48:16.540004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.575 [2024-10-14 17:48:16.540010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.575 [2024-10-14 17:48:16.540016] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.575 [2024-10-14 17:48:16.540030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.575 qpair failed and we were unable to recover it.
00:31:17.575 [2024-10-14 17:48:16.549965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.575 [2024-10-14 17:48:16.550069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.575 [2024-10-14 17:48:16.550084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.575 [2024-10-14 17:48:16.550090] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.575 [2024-10-14 17:48:16.550096] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.575 [2024-10-14 17:48:16.550110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.575 qpair failed and we were unable to recover it.
00:31:17.575 [2024-10-14 17:48:16.559962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.575 [2024-10-14 17:48:16.560011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.575 [2024-10-14 17:48:16.560024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.576 [2024-10-14 17:48:16.560031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.576 [2024-10-14 17:48:16.560037] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.576 [2024-10-14 17:48:16.560051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.576 qpair failed and we were unable to recover it.
00:31:17.576 [2024-10-14 17:48:16.570001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.576 [2024-10-14 17:48:16.570058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.576 [2024-10-14 17:48:16.570072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.576 [2024-10-14 17:48:16.570078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.576 [2024-10-14 17:48:16.570084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.576 [2024-10-14 17:48:16.570097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.576 qpair failed and we were unable to recover it.
00:31:17.576 [2024-10-14 17:48:16.580027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.576 [2024-10-14 17:48:16.580078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.576 [2024-10-14 17:48:16.580092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.576 [2024-10-14 17:48:16.580098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.576 [2024-10-14 17:48:16.580104] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.576 [2024-10-14 17:48:16.580118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.576 qpair failed and we were unable to recover it.
00:31:17.576 [2024-10-14 17:48:16.590045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.576 [2024-10-14 17:48:16.590098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.576 [2024-10-14 17:48:16.590111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.576 [2024-10-14 17:48:16.590122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.576 [2024-10-14 17:48:16.590128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.576 [2024-10-14 17:48:16.590142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.576 qpair failed and we were unable to recover it.
00:31:17.576 [2024-10-14 17:48:16.600078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.576 [2024-10-14 17:48:16.600132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.576 [2024-10-14 17:48:16.600145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.576 [2024-10-14 17:48:16.600152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.576 [2024-10-14 17:48:16.600158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.576 [2024-10-14 17:48:16.600173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.576 qpair failed and we were unable to recover it.
00:31:17.576 [2024-10-14 17:48:16.610168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.576 [2024-10-14 17:48:16.610227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.576 [2024-10-14 17:48:16.610241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.576 [2024-10-14 17:48:16.610247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.576 [2024-10-14 17:48:16.610253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.576 [2024-10-14 17:48:16.610267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.576 qpair failed and we were unable to recover it.
00:31:17.576 [2024-10-14 17:48:16.620146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.576 [2024-10-14 17:48:16.620221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.576 [2024-10-14 17:48:16.620234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.576 [2024-10-14 17:48:16.620241] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.576 [2024-10-14 17:48:16.620247] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.576 [2024-10-14 17:48:16.620260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.576 qpair failed and we were unable to recover it.
00:31:17.576 [2024-10-14 17:48:16.630163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.576 [2024-10-14 17:48:16.630215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.576 [2024-10-14 17:48:16.630229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.576 [2024-10-14 17:48:16.630235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.576 [2024-10-14 17:48:16.630241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.576 [2024-10-14 17:48:16.630254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.576 qpair failed and we were unable to recover it.
00:31:17.576 [2024-10-14 17:48:16.640191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.576 [2024-10-14 17:48:16.640243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.576 [2024-10-14 17:48:16.640257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.576 [2024-10-14 17:48:16.640264] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.576 [2024-10-14 17:48:16.640269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.576 [2024-10-14 17:48:16.640283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.576 qpair failed and we were unable to recover it.
00:31:17.576 [2024-10-14 17:48:16.650220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.576 [2024-10-14 17:48:16.650273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.576 [2024-10-14 17:48:16.650287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.576 [2024-10-14 17:48:16.650293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.576 [2024-10-14 17:48:16.650299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.576 [2024-10-14 17:48:16.650313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.576 qpair failed and we were unable to recover it.
00:31:17.576 [2024-10-14 17:48:16.660243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.576 [2024-10-14 17:48:16.660296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.576 [2024-10-14 17:48:16.660309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.576 [2024-10-14 17:48:16.660315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.576 [2024-10-14 17:48:16.660321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.576 [2024-10-14 17:48:16.660334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.576 qpair failed and we were unable to recover it.
00:31:17.576 [2024-10-14 17:48:16.670324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.576 [2024-10-14 17:48:16.670388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.576 [2024-10-14 17:48:16.670402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.576 [2024-10-14 17:48:16.670408] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.576 [2024-10-14 17:48:16.670414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.576 [2024-10-14 17:48:16.670427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.576 qpair failed and we were unable to recover it.
00:31:17.576 [2024-10-14 17:48:16.680314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.576 [2024-10-14 17:48:16.680364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.576 [2024-10-14 17:48:16.680377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.576 [2024-10-14 17:48:16.680387] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.576 [2024-10-14 17:48:16.680393] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.576 [2024-10-14 17:48:16.680407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.576 qpair failed and we were unable to recover it.
00:31:17.576 [2024-10-14 17:48:16.690376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.576 [2024-10-14 17:48:16.690434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.576 [2024-10-14 17:48:16.690448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.576 [2024-10-14 17:48:16.690454] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.576 [2024-10-14 17:48:16.690460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.576 [2024-10-14 17:48:16.690474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.576 qpair failed and we were unable to recover it.
00:31:17.576 [2024-10-14 17:48:16.700381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.576 [2024-10-14 17:48:16.700469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.576 [2024-10-14 17:48:16.700483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.576 [2024-10-14 17:48:16.700489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.576 [2024-10-14 17:48:16.700495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.576 [2024-10-14 17:48:16.700509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.577 qpair failed and we were unable to recover it.
00:31:17.577 [2024-10-14 17:48:16.710388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.577 [2024-10-14 17:48:16.710441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.577 [2024-10-14 17:48:16.710456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.577 [2024-10-14 17:48:16.710463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.577 [2024-10-14 17:48:16.710469] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.577 [2024-10-14 17:48:16.710485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.577 qpair failed and we were unable to recover it.
00:31:17.837 [2024-10-14 17:48:16.720414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.837 [2024-10-14 17:48:16.720466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.837 [2024-10-14 17:48:16.720483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.837 [2024-10-14 17:48:16.720489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.837 [2024-10-14 17:48:16.720495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.837 [2024-10-14 17:48:16.720510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.837 qpair failed and we were unable to recover it.
00:31:17.837 [2024-10-14 17:48:16.730380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.837 [2024-10-14 17:48:16.730437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.837 [2024-10-14 17:48:16.730451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.837 [2024-10-14 17:48:16.730459] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.837 [2024-10-14 17:48:16.730465] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.837 [2024-10-14 17:48:16.730479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.837 qpair failed and we were unable to recover it.
00:31:17.837 [2024-10-14 17:48:16.740525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.837 [2024-10-14 17:48:16.740582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.837 [2024-10-14 17:48:16.740597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.837 [2024-10-14 17:48:16.740609] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.837 [2024-10-14 17:48:16.740615] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.837 [2024-10-14 17:48:16.740629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.837 qpair failed and we were unable to recover it.
00:31:17.837 [2024-10-14 17:48:16.750488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.837 [2024-10-14 17:48:16.750542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.837 [2024-10-14 17:48:16.750556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.837 [2024-10-14 17:48:16.750563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.837 [2024-10-14 17:48:16.750569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.837 [2024-10-14 17:48:16.750582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.837 qpair failed and we were unable to recover it.
00:31:17.837 [2024-10-14 17:48:16.760518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.837 [2024-10-14 17:48:16.760567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.837 [2024-10-14 17:48:16.760581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.837 [2024-10-14 17:48:16.760587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.837 [2024-10-14 17:48:16.760593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.837 [2024-10-14 17:48:16.760612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.837 qpair failed and we were unable to recover it.
00:31:17.837 [2024-10-14 17:48:16.770481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.837 [2024-10-14 17:48:16.770539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.837 [2024-10-14 17:48:16.770556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.837 [2024-10-14 17:48:16.770563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.837 [2024-10-14 17:48:16.770569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.837 [2024-10-14 17:48:16.770583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.837 qpair failed and we were unable to recover it.
00:31:17.837 [2024-10-14 17:48:16.780585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:17.837 [2024-10-14 17:48:16.780646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:17.837 [2024-10-14 17:48:16.780661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:17.837 [2024-10-14 17:48:16.780668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:17.837 [2024-10-14 17:48:16.780673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:17.837 [2024-10-14 17:48:16.780688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:17.837 qpair failed and we were unable to recover it.
00:31:17.837 [2024-10-14 17:48:16.790544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.837 [2024-10-14 17:48:16.790605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.837 [2024-10-14 17:48:16.790619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.837 [2024-10-14 17:48:16.790626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.837 [2024-10-14 17:48:16.790631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.837 [2024-10-14 17:48:16.790645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.837 qpair failed and we were unable to recover it. 00:31:17.837 [2024-10-14 17:48:16.800597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.837 [2024-10-14 17:48:16.800693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.837 [2024-10-14 17:48:16.800707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.837 [2024-10-14 17:48:16.800713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.837 [2024-10-14 17:48:16.800719] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.837 [2024-10-14 17:48:16.800733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.837 qpair failed and we were unable to recover it. 00:31:17.837 [2024-10-14 17:48:16.810675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.837 [2024-10-14 17:48:16.810733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.837 [2024-10-14 17:48:16.810748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.838 [2024-10-14 17:48:16.810754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.838 [2024-10-14 17:48:16.810760] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.838 [2024-10-14 17:48:16.810774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.838 qpair failed and we were unable to recover it. 
00:31:17.838 [2024-10-14 17:48:16.820632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.838 [2024-10-14 17:48:16.820687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.838 [2024-10-14 17:48:16.820701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.838 [2024-10-14 17:48:16.820707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.838 [2024-10-14 17:48:16.820713] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.838 [2024-10-14 17:48:16.820728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.838 qpair failed and we were unable to recover it. 00:31:17.838 [2024-10-14 17:48:16.830710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.838 [2024-10-14 17:48:16.830760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.838 [2024-10-14 17:48:16.830774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.838 [2024-10-14 17:48:16.830781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.838 [2024-10-14 17:48:16.830786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.838 [2024-10-14 17:48:16.830800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.838 qpair failed and we were unable to recover it. 00:31:17.838 [2024-10-14 17:48:16.840663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.838 [2024-10-14 17:48:16.840722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.838 [2024-10-14 17:48:16.840735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.838 [2024-10-14 17:48:16.840743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.838 [2024-10-14 17:48:16.840748] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.838 [2024-10-14 17:48:16.840763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.838 qpair failed and we were unable to recover it. 
00:31:17.838 [2024-10-14 17:48:16.850777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.838 [2024-10-14 17:48:16.850831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.838 [2024-10-14 17:48:16.850847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.838 [2024-10-14 17:48:16.850854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.838 [2024-10-14 17:48:16.850860] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.838 [2024-10-14 17:48:16.850874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.838 qpair failed and we were unable to recover it. 00:31:17.838 [2024-10-14 17:48:16.860800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.838 [2024-10-14 17:48:16.860855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.838 [2024-10-14 17:48:16.860872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.838 [2024-10-14 17:48:16.860879] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.838 [2024-10-14 17:48:16.860885] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.838 [2024-10-14 17:48:16.860899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.838 qpair failed and we were unable to recover it. 00:31:17.838 [2024-10-14 17:48:16.870835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.838 [2024-10-14 17:48:16.870890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.838 [2024-10-14 17:48:16.870905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.838 [2024-10-14 17:48:16.870912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.838 [2024-10-14 17:48:16.870918] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.838 [2024-10-14 17:48:16.870932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.838 qpair failed and we were unable to recover it. 
00:31:17.838 [2024-10-14 17:48:16.880850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.838 [2024-10-14 17:48:16.880904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.838 [2024-10-14 17:48:16.880919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.838 [2024-10-14 17:48:16.880926] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.838 [2024-10-14 17:48:16.880932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.838 [2024-10-14 17:48:16.880946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.838 qpair failed and we were unable to recover it. 00:31:17.838 [2024-10-14 17:48:16.890829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.838 [2024-10-14 17:48:16.890915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.838 [2024-10-14 17:48:16.890929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.838 [2024-10-14 17:48:16.890935] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.838 [2024-10-14 17:48:16.890941] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.838 [2024-10-14 17:48:16.890955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.838 qpair failed and we were unable to recover it. 00:31:17.838 [2024-10-14 17:48:16.900965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.838 [2024-10-14 17:48:16.901018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.838 [2024-10-14 17:48:16.901032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.838 [2024-10-14 17:48:16.901039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.838 [2024-10-14 17:48:16.901045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.838 [2024-10-14 17:48:16.901059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.838 qpair failed and we were unable to recover it. 
00:31:17.838 [2024-10-14 17:48:16.910878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.838 [2024-10-14 17:48:16.910936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.838 [2024-10-14 17:48:16.910950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.838 [2024-10-14 17:48:16.910956] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.838 [2024-10-14 17:48:16.910962] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.838 [2024-10-14 17:48:16.910976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.838 qpair failed and we were unable to recover it. 00:31:17.838 [2024-10-14 17:48:16.920978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.838 [2024-10-14 17:48:16.921035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.838 [2024-10-14 17:48:16.921049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.838 [2024-10-14 17:48:16.921056] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.838 [2024-10-14 17:48:16.921062] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.838 [2024-10-14 17:48:16.921076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.838 qpair failed and we were unable to recover it. 00:31:17.838 [2024-10-14 17:48:16.931038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.838 [2024-10-14 17:48:16.931095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.838 [2024-10-14 17:48:16.931109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.838 [2024-10-14 17:48:16.931116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.838 [2024-10-14 17:48:16.931121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.838 [2024-10-14 17:48:16.931135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.838 qpair failed and we were unable to recover it. 
00:31:17.838 [2024-10-14 17:48:16.940959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.838 [2024-10-14 17:48:16.941015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.838 [2024-10-14 17:48:16.941028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.838 [2024-10-14 17:48:16.941035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.838 [2024-10-14 17:48:16.941041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.838 [2024-10-14 17:48:16.941055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.838 qpair failed and we were unable to recover it. 00:31:17.838 [2024-10-14 17:48:16.950995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.838 [2024-10-14 17:48:16.951052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.838 [2024-10-14 17:48:16.951069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.838 [2024-10-14 17:48:16.951076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.838 [2024-10-14 17:48:16.951082] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.839 [2024-10-14 17:48:16.951096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.839 qpair failed and we were unable to recover it. 00:31:17.839 [2024-10-14 17:48:16.961065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.839 [2024-10-14 17:48:16.961122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.839 [2024-10-14 17:48:16.961136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.839 [2024-10-14 17:48:16.961142] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.839 [2024-10-14 17:48:16.961148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.839 [2024-10-14 17:48:16.961162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.839 qpair failed and we were unable to recover it. 
00:31:17.839 [2024-10-14 17:48:16.971047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.839 [2024-10-14 17:48:16.971104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.839 [2024-10-14 17:48:16.971118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.839 [2024-10-14 17:48:16.971125] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.839 [2024-10-14 17:48:16.971131] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:17.839 [2024-10-14 17:48:16.971144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:17.839 qpair failed and we were unable to recover it. 00:31:18.099 [2024-10-14 17:48:16.981074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.099 [2024-10-14 17:48:16.981132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.099 [2024-10-14 17:48:16.981149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.099 [2024-10-14 17:48:16.981156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.099 [2024-10-14 17:48:16.981163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.099 [2024-10-14 17:48:16.981179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.099 qpair failed and we were unable to recover it. 00:31:18.099 [2024-10-14 17:48:16.991098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.099 [2024-10-14 17:48:16.991151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.099 [2024-10-14 17:48:16.991167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.099 [2024-10-14 17:48:16.991174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.099 [2024-10-14 17:48:16.991180] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.099 [2024-10-14 17:48:16.991198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.099 qpair failed and we were unable to recover it. 
00:31:18.099 [2024-10-14 17:48:17.001173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.099 [2024-10-14 17:48:17.001222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.099 [2024-10-14 17:48:17.001237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.099 [2024-10-14 17:48:17.001243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.099 [2024-10-14 17:48:17.001249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.099 [2024-10-14 17:48:17.001263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.099 qpair failed and we were unable to recover it. 00:31:18.099 [2024-10-14 17:48:17.011205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.099 [2024-10-14 17:48:17.011274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.099 [2024-10-14 17:48:17.011288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.099 [2024-10-14 17:48:17.011294] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.099 [2024-10-14 17:48:17.011300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.099 [2024-10-14 17:48:17.011314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.099 qpair failed and we were unable to recover it. 00:31:18.099 [2024-10-14 17:48:17.021185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.099 [2024-10-14 17:48:17.021243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.099 [2024-10-14 17:48:17.021258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.099 [2024-10-14 17:48:17.021265] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.099 [2024-10-14 17:48:17.021270] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.099 [2024-10-14 17:48:17.021285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.099 qpair failed and we were unable to recover it. 
00:31:18.099 [2024-10-14 17:48:17.031217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.099 [2024-10-14 17:48:17.031274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.099 [2024-10-14 17:48:17.031288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.099 [2024-10-14 17:48:17.031295] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.099 [2024-10-14 17:48:17.031301] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.099 [2024-10-14 17:48:17.031314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.099 qpair failed and we were unable to recover it. 00:31:18.099 [2024-10-14 17:48:17.041243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.099 [2024-10-14 17:48:17.041293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.099 [2024-10-14 17:48:17.041309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.099 [2024-10-14 17:48:17.041316] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.100 [2024-10-14 17:48:17.041322] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.100 [2024-10-14 17:48:17.041336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.100 qpair failed and we were unable to recover it. 00:31:18.100 [2024-10-14 17:48:17.051280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.100 [2024-10-14 17:48:17.051369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.100 [2024-10-14 17:48:17.051383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.100 [2024-10-14 17:48:17.051389] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.100 [2024-10-14 17:48:17.051395] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.100 [2024-10-14 17:48:17.051408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.100 qpair failed and we were unable to recover it. 
00:31:18.100 [2024-10-14 17:48:17.061371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.100 [2024-10-14 17:48:17.061427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.100 [2024-10-14 17:48:17.061441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.100 [2024-10-14 17:48:17.061447] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.100 [2024-10-14 17:48:17.061453] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.100 [2024-10-14 17:48:17.061467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.100 qpair failed and we were unable to recover it. 00:31:18.100 [2024-10-14 17:48:17.071390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.100 [2024-10-14 17:48:17.071446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.100 [2024-10-14 17:48:17.071460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.100 [2024-10-14 17:48:17.071467] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.100 [2024-10-14 17:48:17.071472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.100 [2024-10-14 17:48:17.071486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.100 qpair failed and we were unable to recover it. 00:31:18.100 [2024-10-14 17:48:17.081415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.100 [2024-10-14 17:48:17.081471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.100 [2024-10-14 17:48:17.081485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.100 [2024-10-14 17:48:17.081492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.100 [2024-10-14 17:48:17.081497] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.100 [2024-10-14 17:48:17.081514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.100 qpair failed and we were unable to recover it. 
00:31:18.100 [2024-10-14 17:48:17.091457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.100 [2024-10-14 17:48:17.091526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.100 [2024-10-14 17:48:17.091540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.100 [2024-10-14 17:48:17.091546] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.100 [2024-10-14 17:48:17.091552] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.100 [2024-10-14 17:48:17.091565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.100 qpair failed and we were unable to recover it. 00:31:18.100 [2024-10-14 17:48:17.101416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.100 [2024-10-14 17:48:17.101473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.100 [2024-10-14 17:48:17.101486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.100 [2024-10-14 17:48:17.101493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.100 [2024-10-14 17:48:17.101499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.100 [2024-10-14 17:48:17.101512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.100 qpair failed and we were unable to recover it. 00:31:18.100 [2024-10-14 17:48:17.111469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.100 [2024-10-14 17:48:17.111519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.100 [2024-10-14 17:48:17.111533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.100 [2024-10-14 17:48:17.111539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.100 [2024-10-14 17:48:17.111545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.100 [2024-10-14 17:48:17.111559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.100 qpair failed and we were unable to recover it. 
00:31:18.100 [2024-10-14 17:48:17.121446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.100 [2024-10-14 17:48:17.121493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.100 [2024-10-14 17:48:17.121507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.100 [2024-10-14 17:48:17.121514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.100 [2024-10-14 17:48:17.121519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.100 [2024-10-14 17:48:17.121533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.100 qpair failed and we were unable to recover it. 00:31:18.100 [2024-10-14 17:48:17.131496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.100 [2024-10-14 17:48:17.131585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.100 [2024-10-14 17:48:17.131607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.100 [2024-10-14 17:48:17.131615] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.100 [2024-10-14 17:48:17.131620] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.100 [2024-10-14 17:48:17.131634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.100 qpair failed and we were unable to recover it. 00:31:18.100 [2024-10-14 17:48:17.141514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.100 [2024-10-14 17:48:17.141605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.100 [2024-10-14 17:48:17.141619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.100 [2024-10-14 17:48:17.141626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.100 [2024-10-14 17:48:17.141631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.100 [2024-10-14 17:48:17.141645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.100 qpair failed and we were unable to recover it. 
00:31:18.100 [2024-10-14 17:48:17.151640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.100 [2024-10-14 17:48:17.151706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.100 [2024-10-14 17:48:17.151719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.100 [2024-10-14 17:48:17.151726] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.100 [2024-10-14 17:48:17.151732] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.100 [2024-10-14 17:48:17.151747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.100 qpair failed and we were unable to recover it. 00:31:18.100 [2024-10-14 17:48:17.161656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.100 [2024-10-14 17:48:17.161713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.100 [2024-10-14 17:48:17.161726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.100 [2024-10-14 17:48:17.161733] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.100 [2024-10-14 17:48:17.161739] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.100 [2024-10-14 17:48:17.161753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.100 qpair failed and we were unable to recover it. 00:31:18.100 [2024-10-14 17:48:17.171706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.100 [2024-10-14 17:48:17.171813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.100 [2024-10-14 17:48:17.171827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.100 [2024-10-14 17:48:17.171834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.100 [2024-10-14 17:48:17.171840] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.100 [2024-10-14 17:48:17.171859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.100 qpair failed and we were unable to recover it. 
00:31:18.100 [2024-10-14 17:48:17.181705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.100 [2024-10-14 17:48:17.181777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.100 [2024-10-14 17:48:17.181792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.100 [2024-10-14 17:48:17.181800] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.100 [2024-10-14 17:48:17.181806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.100 [2024-10-14 17:48:17.181820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.100 qpair failed and we were unable to recover it. 00:31:18.101 [2024-10-14 17:48:17.191730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.101 [2024-10-14 17:48:17.191782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.101 [2024-10-14 17:48:17.191796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.101 [2024-10-14 17:48:17.191803] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.101 [2024-10-14 17:48:17.191809] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.101 [2024-10-14 17:48:17.191824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.101 qpair failed and we were unable to recover it. 00:31:18.101 [2024-10-14 17:48:17.201780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.101 [2024-10-14 17:48:17.201830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.101 [2024-10-14 17:48:17.201844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.101 [2024-10-14 17:48:17.201851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.101 [2024-10-14 17:48:17.201856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.101 [2024-10-14 17:48:17.201871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.101 qpair failed and we were unable to recover it. 
00:31:18.101 [2024-10-14 17:48:17.211799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.101 [2024-10-14 17:48:17.211856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.101 [2024-10-14 17:48:17.211870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.101 [2024-10-14 17:48:17.211877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.101 [2024-10-14 17:48:17.211883] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.101 [2024-10-14 17:48:17.211897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.101 qpair failed and we were unable to recover it. 00:31:18.101 [2024-10-14 17:48:17.221831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.101 [2024-10-14 17:48:17.221886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.101 [2024-10-14 17:48:17.221904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.101 [2024-10-14 17:48:17.221911] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.101 [2024-10-14 17:48:17.221917] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.101 [2024-10-14 17:48:17.221931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.101 qpair failed and we were unable to recover it. 00:31:18.101 [2024-10-14 17:48:17.231848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.101 [2024-10-14 17:48:17.231904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.101 [2024-10-14 17:48:17.231917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.101 [2024-10-14 17:48:17.231923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.101 [2024-10-14 17:48:17.231929] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.101 [2024-10-14 17:48:17.231943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.101 qpair failed and we were unable to recover it. 
00:31:18.361 [2024-10-14 17:48:17.241891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.361 [2024-10-14 17:48:17.241944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.361 [2024-10-14 17:48:17.241960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.361 [2024-10-14 17:48:17.241968] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.361 [2024-10-14 17:48:17.241973] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.361 [2024-10-14 17:48:17.241989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.361 qpair failed and we were unable to recover it. 00:31:18.361 [2024-10-14 17:48:17.251920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.361 [2024-10-14 17:48:17.251979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.361 [2024-10-14 17:48:17.251995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.361 [2024-10-14 17:48:17.252002] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.361 [2024-10-14 17:48:17.252008] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.361 [2024-10-14 17:48:17.252023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.361 qpair failed and we were unable to recover it. 00:31:18.361 [2024-10-14 17:48:17.261946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.361 [2024-10-14 17:48:17.261999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.361 [2024-10-14 17:48:17.262013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.361 [2024-10-14 17:48:17.262020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.361 [2024-10-14 17:48:17.262029] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.361 [2024-10-14 17:48:17.262043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.361 qpair failed and we were unable to recover it. 
00:31:18.361 [2024-10-14 17:48:17.271968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.361 [2024-10-14 17:48:17.272020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.361 [2024-10-14 17:48:17.272034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.361 [2024-10-14 17:48:17.272041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.361 [2024-10-14 17:48:17.272047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.361 [2024-10-14 17:48:17.272060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.361 qpair failed and we were unable to recover it. 00:31:18.361 [2024-10-14 17:48:17.282027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.361 [2024-10-14 17:48:17.282112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.361 [2024-10-14 17:48:17.282126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.361 [2024-10-14 17:48:17.282132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.361 [2024-10-14 17:48:17.282138] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.361 [2024-10-14 17:48:17.282152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.361 qpair failed and we were unable to recover it. 00:31:18.361 [2024-10-14 17:48:17.292024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.361 [2024-10-14 17:48:17.292077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.361 [2024-10-14 17:48:17.292091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.361 [2024-10-14 17:48:17.292098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.361 [2024-10-14 17:48:17.292103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.361 [2024-10-14 17:48:17.292117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.361 qpair failed and we were unable to recover it. 
00:31:18.361 [2024-10-14 17:48:17.302069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.361 [2024-10-14 17:48:17.302124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.361 [2024-10-14 17:48:17.302137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.361 [2024-10-14 17:48:17.302144] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.361 [2024-10-14 17:48:17.302150] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.361 [2024-10-14 17:48:17.302164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.361 qpair failed and we were unable to recover it. 00:31:18.361 [2024-10-14 17:48:17.312077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.361 [2024-10-14 17:48:17.312133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.361 [2024-10-14 17:48:17.312147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.361 [2024-10-14 17:48:17.312153] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.361 [2024-10-14 17:48:17.312159] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.362 [2024-10-14 17:48:17.312172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.362 qpair failed and we were unable to recover it. 00:31:18.362 [2024-10-14 17:48:17.322110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.362 [2024-10-14 17:48:17.322172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.362 [2024-10-14 17:48:17.322185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.362 [2024-10-14 17:48:17.322192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.362 [2024-10-14 17:48:17.322198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.362 [2024-10-14 17:48:17.322211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.362 qpair failed and we were unable to recover it. 
00:31:18.362 [2024-10-14 17:48:17.332132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.362 [2024-10-14 17:48:17.332188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.362 [2024-10-14 17:48:17.332201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.362 [2024-10-14 17:48:17.332208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.362 [2024-10-14 17:48:17.332214] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.362 [2024-10-14 17:48:17.332227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.362 qpair failed and we were unable to recover it. 00:31:18.362 [2024-10-14 17:48:17.342160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.362 [2024-10-14 17:48:17.342214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.362 [2024-10-14 17:48:17.342227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.362 [2024-10-14 17:48:17.342233] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.362 [2024-10-14 17:48:17.342239] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.362 [2024-10-14 17:48:17.342253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.362 qpair failed and we were unable to recover it. 00:31:18.362 [2024-10-14 17:48:17.352273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.362 [2024-10-14 17:48:17.352357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.362 [2024-10-14 17:48:17.352370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.362 [2024-10-14 17:48:17.352377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.362 [2024-10-14 17:48:17.352386] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.362 [2024-10-14 17:48:17.352400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.362 qpair failed and we were unable to recover it. 
00:31:18.362 [2024-10-14 17:48:17.362230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.362 [2024-10-14 17:48:17.362281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.362 [2024-10-14 17:48:17.362295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.362 [2024-10-14 17:48:17.362301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.362 [2024-10-14 17:48:17.362307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.362 [2024-10-14 17:48:17.362321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.362 qpair failed and we were unable to recover it.
00:31:18.362 [2024-10-14 17:48:17.372304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.362 [2024-10-14 17:48:17.372405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.362 [2024-10-14 17:48:17.372419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.362 [2024-10-14 17:48:17.372425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.362 [2024-10-14 17:48:17.372431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.362 [2024-10-14 17:48:17.372445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.362 qpair failed and we were unable to recover it.
00:31:18.362 [2024-10-14 17:48:17.382314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.362 [2024-10-14 17:48:17.382368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.362 [2024-10-14 17:48:17.382382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.362 [2024-10-14 17:48:17.382388] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.362 [2024-10-14 17:48:17.382394] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.362 [2024-10-14 17:48:17.382408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.362 qpair failed and we were unable to recover it.
00:31:18.362 [2024-10-14 17:48:17.392336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.362 [2024-10-14 17:48:17.392387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.362 [2024-10-14 17:48:17.392400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.362 [2024-10-14 17:48:17.392407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.362 [2024-10-14 17:48:17.392413] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.362 [2024-10-14 17:48:17.392426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.362 qpair failed and we were unable to recover it.
00:31:18.362 [2024-10-14 17:48:17.402339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.362 [2024-10-14 17:48:17.402403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.362 [2024-10-14 17:48:17.402418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.362 [2024-10-14 17:48:17.402425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.362 [2024-10-14 17:48:17.402431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.362 [2024-10-14 17:48:17.402444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.362 qpair failed and we were unable to recover it.
00:31:18.362 [2024-10-14 17:48:17.412371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.362 [2024-10-14 17:48:17.412425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.362 [2024-10-14 17:48:17.412440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.362 [2024-10-14 17:48:17.412447] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.362 [2024-10-14 17:48:17.412454] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.362 [2024-10-14 17:48:17.412468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.362 qpair failed and we were unable to recover it.
00:31:18.362 [2024-10-14 17:48:17.422400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.362 [2024-10-14 17:48:17.422458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.362 [2024-10-14 17:48:17.422472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.362 [2024-10-14 17:48:17.422478] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.362 [2024-10-14 17:48:17.422484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.362 [2024-10-14 17:48:17.422498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.362 qpair failed and we were unable to recover it.
00:31:18.362 [2024-10-14 17:48:17.432443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.362 [2024-10-14 17:48:17.432493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.362 [2024-10-14 17:48:17.432506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.362 [2024-10-14 17:48:17.432512] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.362 [2024-10-14 17:48:17.432518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.362 [2024-10-14 17:48:17.432533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.362 qpair failed and we were unable to recover it.
00:31:18.362 [2024-10-14 17:48:17.442455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.362 [2024-10-14 17:48:17.442503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.362 [2024-10-14 17:48:17.442517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.362 [2024-10-14 17:48:17.442523] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.362 [2024-10-14 17:48:17.442532] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.362 [2024-10-14 17:48:17.442546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.362 qpair failed and we were unable to recover it.
00:31:18.362 [2024-10-14 17:48:17.452536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.362 [2024-10-14 17:48:17.452642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.362 [2024-10-14 17:48:17.452657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.362 [2024-10-14 17:48:17.452664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.362 [2024-10-14 17:48:17.452669] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.362 [2024-10-14 17:48:17.452683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.363 qpair failed and we were unable to recover it.
00:31:18.363 [2024-10-14 17:48:17.462481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.363 [2024-10-14 17:48:17.462585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.363 [2024-10-14 17:48:17.462605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.363 [2024-10-14 17:48:17.462613] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.363 [2024-10-14 17:48:17.462619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.363 [2024-10-14 17:48:17.462633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.363 qpair failed and we were unable to recover it.
00:31:18.363 [2024-10-14 17:48:17.472565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.363 [2024-10-14 17:48:17.472630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.363 [2024-10-14 17:48:17.472644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.363 [2024-10-14 17:48:17.472651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.363 [2024-10-14 17:48:17.472657] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.363 [2024-10-14 17:48:17.472671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.363 qpair failed and we were unable to recover it.
00:31:18.363 [2024-10-14 17:48:17.482552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.363 [2024-10-14 17:48:17.482610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.363 [2024-10-14 17:48:17.482624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.363 [2024-10-14 17:48:17.482631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.363 [2024-10-14 17:48:17.482637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.363 [2024-10-14 17:48:17.482651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.363 qpair failed and we were unable to recover it.
00:31:18.363 [2024-10-14 17:48:17.492651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.363 [2024-10-14 17:48:17.492707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.363 [2024-10-14 17:48:17.492721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.363 [2024-10-14 17:48:17.492728] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.363 [2024-10-14 17:48:17.492733] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.363 [2024-10-14 17:48:17.492747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.363 qpair failed and we were unable to recover it.
00:31:18.622 [2024-10-14 17:48:17.502622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.623 [2024-10-14 17:48:17.502701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.623 [2024-10-14 17:48:17.502719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.623 [2024-10-14 17:48:17.502726] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.623 [2024-10-14 17:48:17.502731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.623 [2024-10-14 17:48:17.502747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.623 qpair failed and we were unable to recover it.
00:31:18.623 [2024-10-14 17:48:17.512652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.623 [2024-10-14 17:48:17.512707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.623 [2024-10-14 17:48:17.512723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.623 [2024-10-14 17:48:17.512731] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.623 [2024-10-14 17:48:17.512737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.623 [2024-10-14 17:48:17.512752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.623 qpair failed and we were unable to recover it.
00:31:18.623 [2024-10-14 17:48:17.522660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.623 [2024-10-14 17:48:17.522710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.623 [2024-10-14 17:48:17.522725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.623 [2024-10-14 17:48:17.522732] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.623 [2024-10-14 17:48:17.522738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.623 [2024-10-14 17:48:17.522752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.623 qpair failed and we were unable to recover it.
00:31:18.623 [2024-10-14 17:48:17.532727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.623 [2024-10-14 17:48:17.532787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.623 [2024-10-14 17:48:17.532802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.623 [2024-10-14 17:48:17.532809] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.623 [2024-10-14 17:48:17.532817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.623 [2024-10-14 17:48:17.532832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.623 qpair failed and we were unable to recover it.
00:31:18.623 [2024-10-14 17:48:17.542737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.623 [2024-10-14 17:48:17.542789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.623 [2024-10-14 17:48:17.542802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.623 [2024-10-14 17:48:17.542809] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.623 [2024-10-14 17:48:17.542814] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.623 [2024-10-14 17:48:17.542828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.623 qpair failed and we were unable to recover it.
00:31:18.623 [2024-10-14 17:48:17.552798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.623 [2024-10-14 17:48:17.552850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.623 [2024-10-14 17:48:17.552864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.623 [2024-10-14 17:48:17.552870] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.623 [2024-10-14 17:48:17.552876] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.623 [2024-10-14 17:48:17.552890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.623 qpair failed and we were unable to recover it.
00:31:18.623 [2024-10-14 17:48:17.562793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.623 [2024-10-14 17:48:17.562844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.623 [2024-10-14 17:48:17.562858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.623 [2024-10-14 17:48:17.562864] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.623 [2024-10-14 17:48:17.562870] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.623 [2024-10-14 17:48:17.562884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.623 qpair failed and we were unable to recover it.
00:31:18.623 [2024-10-14 17:48:17.572817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.623 [2024-10-14 17:48:17.572901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.623 [2024-10-14 17:48:17.572915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.623 [2024-10-14 17:48:17.572921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.623 [2024-10-14 17:48:17.572927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.623 [2024-10-14 17:48:17.572941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.623 qpair failed and we were unable to recover it.
00:31:18.623 [2024-10-14 17:48:17.582839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.623 [2024-10-14 17:48:17.582895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.623 [2024-10-14 17:48:17.582908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.623 [2024-10-14 17:48:17.582915] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.623 [2024-10-14 17:48:17.582920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.623 [2024-10-14 17:48:17.582934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.623 qpair failed and we were unable to recover it.
00:31:18.623 [2024-10-14 17:48:17.592864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.623 [2024-10-14 17:48:17.592915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.623 [2024-10-14 17:48:17.592929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.623 [2024-10-14 17:48:17.592935] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.623 [2024-10-14 17:48:17.592941] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.623 [2024-10-14 17:48:17.592955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.623 qpair failed and we were unable to recover it.
00:31:18.623 [2024-10-14 17:48:17.602890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.623 [2024-10-14 17:48:17.602940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.623 [2024-10-14 17:48:17.602954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.623 [2024-10-14 17:48:17.602960] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.623 [2024-10-14 17:48:17.602966] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.623 [2024-10-14 17:48:17.602980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.623 qpair failed and we were unable to recover it.
00:31:18.623 [2024-10-14 17:48:17.612934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.623 [2024-10-14 17:48:17.612986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.623 [2024-10-14 17:48:17.612999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.623 [2024-10-14 17:48:17.613006] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.623 [2024-10-14 17:48:17.613011] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.623 [2024-10-14 17:48:17.613025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.623 qpair failed and we were unable to recover it.
00:31:18.623 [2024-10-14 17:48:17.622989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.623 [2024-10-14 17:48:17.623052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.623 [2024-10-14 17:48:17.623066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.623 [2024-10-14 17:48:17.623075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.623 [2024-10-14 17:48:17.623081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.624 [2024-10-14 17:48:17.623095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.624 qpair failed and we were unable to recover it.
00:31:18.624 [2024-10-14 17:48:17.632978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.624 [2024-10-14 17:48:17.633072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.624 [2024-10-14 17:48:17.633085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.624 [2024-10-14 17:48:17.633092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.624 [2024-10-14 17:48:17.633097] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.624 [2024-10-14 17:48:17.633111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.624 qpair failed and we were unable to recover it.
00:31:18.624 [2024-10-14 17:48:17.643050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.624 [2024-10-14 17:48:17.643110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.624 [2024-10-14 17:48:17.643123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.624 [2024-10-14 17:48:17.643130] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.624 [2024-10-14 17:48:17.643136] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.624 [2024-10-14 17:48:17.643149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.624 qpair failed and we were unable to recover it.
00:31:18.624 [2024-10-14 17:48:17.652986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.624 [2024-10-14 17:48:17.653089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.624 [2024-10-14 17:48:17.653103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.624 [2024-10-14 17:48:17.653109] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.624 [2024-10-14 17:48:17.653115] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.624 [2024-10-14 17:48:17.653129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.624 qpair failed and we were unable to recover it.
00:31:18.624 [2024-10-14 17:48:17.662999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.624 [2024-10-14 17:48:17.663090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.624 [2024-10-14 17:48:17.663103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.624 [2024-10-14 17:48:17.663110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.624 [2024-10-14 17:48:17.663115] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.624 [2024-10-14 17:48:17.663129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.624 qpair failed and we were unable to recover it.
00:31:18.624 [2024-10-14 17:48:17.673070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.624 [2024-10-14 17:48:17.673152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.624 [2024-10-14 17:48:17.673166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.624 [2024-10-14 17:48:17.673172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.624 [2024-10-14 17:48:17.673178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.624 [2024-10-14 17:48:17.673192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.624 qpair failed and we were unable to recover it.
00:31:18.624 [2024-10-14 17:48:17.683103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.624 [2024-10-14 17:48:17.683154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.624 [2024-10-14 17:48:17.683167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.624 [2024-10-14 17:48:17.683174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.624 [2024-10-14 17:48:17.683179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.624 [2024-10-14 17:48:17.683193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.624 qpair failed and we were unable to recover it.
00:31:18.624 [2024-10-14 17:48:17.693156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.624 [2024-10-14 17:48:17.693209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.624 [2024-10-14 17:48:17.693222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.624 [2024-10-14 17:48:17.693229] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.624 [2024-10-14 17:48:17.693235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.624 [2024-10-14 17:48:17.693249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.624 qpair failed and we were unable to recover it.
00:31:18.624 [2024-10-14 17:48:17.703177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.624 [2024-10-14 17:48:17.703229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.624 [2024-10-14 17:48:17.703243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.624 [2024-10-14 17:48:17.703250] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.624 [2024-10-14 17:48:17.703255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.624 [2024-10-14 17:48:17.703270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.624 qpair failed and we were unable to recover it.
00:31:18.624 [2024-10-14 17:48:17.713234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.624 [2024-10-14 17:48:17.713302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.624 [2024-10-14 17:48:17.713316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.624 [2024-10-14 17:48:17.713326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.624 [2024-10-14 17:48:17.713332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.624 [2024-10-14 17:48:17.713346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.624 qpair failed and we were unable to recover it.
00:31:18.624 [2024-10-14 17:48:17.723235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.624 [2024-10-14 17:48:17.723287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.624 [2024-10-14 17:48:17.723301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.624 [2024-10-14 17:48:17.723307] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.624 [2024-10-14 17:48:17.723313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.624 [2024-10-14 17:48:17.723327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.624 qpair failed and we were unable to recover it.
00:31:18.624 [2024-10-14 17:48:17.733281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.624 [2024-10-14 17:48:17.733345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.624 [2024-10-14 17:48:17.733359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.624 [2024-10-14 17:48:17.733366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.624 [2024-10-14 17:48:17.733371] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.624 [2024-10-14 17:48:17.733385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.624 qpair failed and we were unable to recover it.
00:31:18.624 [2024-10-14 17:48:17.743351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.624 [2024-10-14 17:48:17.743408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.624 [2024-10-14 17:48:17.743422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.624 [2024-10-14 17:48:17.743428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.624 [2024-10-14 17:48:17.743434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.624 [2024-10-14 17:48:17.743448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.624 qpair failed and we were unable to recover it.
00:31:18.624 [2024-10-14 17:48:17.753332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.624 [2024-10-14 17:48:17.753381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.624 [2024-10-14 17:48:17.753396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.624 [2024-10-14 17:48:17.753402] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.624 [2024-10-14 17:48:17.753408] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.624 [2024-10-14 17:48:17.753422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.624 qpair failed and we were unable to recover it.
00:31:18.903 [2024-10-14 17:48:17.763351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.904 [2024-10-14 17:48:17.763432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.904 [2024-10-14 17:48:17.763449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.904 [2024-10-14 17:48:17.763457] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.904 [2024-10-14 17:48:17.763462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.904 [2024-10-14 17:48:17.763478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.904 qpair failed and we were unable to recover it.
00:31:18.904 [2024-10-14 17:48:17.773428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.904 [2024-10-14 17:48:17.773484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.904 [2024-10-14 17:48:17.773500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.904 [2024-10-14 17:48:17.773507] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.904 [2024-10-14 17:48:17.773513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.904 [2024-10-14 17:48:17.773528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.904 qpair failed and we were unable to recover it.
00:31:18.904 [2024-10-14 17:48:17.783411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.904 [2024-10-14 17:48:17.783467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.904 [2024-10-14 17:48:17.783481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.904 [2024-10-14 17:48:17.783488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.904 [2024-10-14 17:48:17.783494] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.904 [2024-10-14 17:48:17.783508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.904 qpair failed and we were unable to recover it.
00:31:18.904 [2024-10-14 17:48:17.793441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.904 [2024-10-14 17:48:17.793493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.904 [2024-10-14 17:48:17.793507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.904 [2024-10-14 17:48:17.793513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.904 [2024-10-14 17:48:17.793519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.904 [2024-10-14 17:48:17.793533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.904 qpair failed and we were unable to recover it.
00:31:18.904 [2024-10-14 17:48:17.803468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.904 [2024-10-14 17:48:17.803521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.904 [2024-10-14 17:48:17.803535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.904 [2024-10-14 17:48:17.803545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.904 [2024-10-14 17:48:17.803551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.904 [2024-10-14 17:48:17.803565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.904 qpair failed and we were unable to recover it.
00:31:18.904 [2024-10-14 17:48:17.813510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.904 [2024-10-14 17:48:17.813563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.904 [2024-10-14 17:48:17.813577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.904 [2024-10-14 17:48:17.813584] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.904 [2024-10-14 17:48:17.813590] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.904 [2024-10-14 17:48:17.813613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.904 qpair failed and we were unable to recover it.
00:31:18.904 [2024-10-14 17:48:17.823468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.904 [2024-10-14 17:48:17.823529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.904 [2024-10-14 17:48:17.823542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.904 [2024-10-14 17:48:17.823549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.904 [2024-10-14 17:48:17.823555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.904 [2024-10-14 17:48:17.823569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.904 qpair failed and we were unable to recover it.
00:31:18.904 [2024-10-14 17:48:17.833568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.904 [2024-10-14 17:48:17.833620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.904 [2024-10-14 17:48:17.833635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.904 [2024-10-14 17:48:17.833642] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.904 [2024-10-14 17:48:17.833648] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.904 [2024-10-14 17:48:17.833662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.904 qpair failed and we were unable to recover it.
00:31:18.904 [2024-10-14 17:48:17.843611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.904 [2024-10-14 17:48:17.843690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.904 [2024-10-14 17:48:17.843705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.904 [2024-10-14 17:48:17.843713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.904 [2024-10-14 17:48:17.843719] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.904 [2024-10-14 17:48:17.843734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.904 qpair failed and we were unable to recover it.
00:31:18.904 [2024-10-14 17:48:17.853640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.904 [2024-10-14 17:48:17.853699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.904 [2024-10-14 17:48:17.853713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.904 [2024-10-14 17:48:17.853719] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.904 [2024-10-14 17:48:17.853725] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.904 [2024-10-14 17:48:17.853739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.904 qpair failed and we were unable to recover it.
00:31:18.904 [2024-10-14 17:48:17.863668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.904 [2024-10-14 17:48:17.863720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.904 [2024-10-14 17:48:17.863734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.904 [2024-10-14 17:48:17.863741] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.904 [2024-10-14 17:48:17.863747] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.904 [2024-10-14 17:48:17.863760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.904 qpair failed and we were unable to recover it.
00:31:18.904 [2024-10-14 17:48:17.873693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.904 [2024-10-14 17:48:17.873760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.904 [2024-10-14 17:48:17.873774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.904 [2024-10-14 17:48:17.873780] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.904 [2024-10-14 17:48:17.873786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.904 [2024-10-14 17:48:17.873800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.904 qpair failed and we were unable to recover it.
00:31:18.904 [2024-10-14 17:48:17.883782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.905 [2024-10-14 17:48:17.883877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.905 [2024-10-14 17:48:17.883890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.905 [2024-10-14 17:48:17.883896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.905 [2024-10-14 17:48:17.883902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.905 [2024-10-14 17:48:17.883916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.905 qpair failed and we were unable to recover it.
00:31:18.905 [2024-10-14 17:48:17.893803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.905 [2024-10-14 17:48:17.893911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.905 [2024-10-14 17:48:17.893924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.905 [2024-10-14 17:48:17.893934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.905 [2024-10-14 17:48:17.893940] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.905 [2024-10-14 17:48:17.893954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.905 qpair failed and we were unable to recover it.
00:31:18.905 [2024-10-14 17:48:17.903836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.905 [2024-10-14 17:48:17.903899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.905 [2024-10-14 17:48:17.903912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.905 [2024-10-14 17:48:17.903919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.905 [2024-10-14 17:48:17.903925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.905 [2024-10-14 17:48:17.903939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.905 qpair failed and we were unable to recover it.
00:31:18.905 [2024-10-14 17:48:17.913794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.905 [2024-10-14 17:48:17.913846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.905 [2024-10-14 17:48:17.913860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.905 [2024-10-14 17:48:17.913866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.905 [2024-10-14 17:48:17.913872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.905 [2024-10-14 17:48:17.913886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.905 qpair failed and we were unable to recover it.
00:31:18.905 [2024-10-14 17:48:17.923843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:18.905 [2024-10-14 17:48:17.923894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:18.905 [2024-10-14 17:48:17.923907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:18.905 [2024-10-14 17:48:17.923914] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:18.905 [2024-10-14 17:48:17.923920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:18.905 [2024-10-14 17:48:17.923933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:18.905 qpair failed and we were unable to recover it.
00:31:18.905 [2024-10-14 17:48:17.933876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.905 [2024-10-14 17:48:17.933978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.905 [2024-10-14 17:48:17.933992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.905 [2024-10-14 17:48:17.933998] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.905 [2024-10-14 17:48:17.934004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.905 [2024-10-14 17:48:17.934018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.905 qpair failed and we were unable to recover it. 00:31:18.905 [2024-10-14 17:48:17.943886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.905 [2024-10-14 17:48:17.943963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.905 [2024-10-14 17:48:17.943977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.905 [2024-10-14 17:48:17.943983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.905 [2024-10-14 17:48:17.943989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.905 [2024-10-14 17:48:17.944002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.905 qpair failed and we were unable to recover it. 00:31:18.905 [2024-10-14 17:48:17.953861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.905 [2024-10-14 17:48:17.953914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.905 [2024-10-14 17:48:17.953928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.905 [2024-10-14 17:48:17.953934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.905 [2024-10-14 17:48:17.953940] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.905 [2024-10-14 17:48:17.953953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.905 qpair failed and we were unable to recover it. 
00:31:18.905 [2024-10-14 17:48:17.963952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.905 [2024-10-14 17:48:17.964041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.905 [2024-10-14 17:48:17.964055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.905 [2024-10-14 17:48:17.964061] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.905 [2024-10-14 17:48:17.964067] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.905 [2024-10-14 17:48:17.964080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.905 qpair failed and we were unable to recover it. 00:31:18.905 [2024-10-14 17:48:17.973986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.905 [2024-10-14 17:48:17.974042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.905 [2024-10-14 17:48:17.974056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.905 [2024-10-14 17:48:17.974062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.905 [2024-10-14 17:48:17.974068] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.905 [2024-10-14 17:48:17.974082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.905 qpair failed and we were unable to recover it. 00:31:18.905 [2024-10-14 17:48:17.984015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.905 [2024-10-14 17:48:17.984071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.905 [2024-10-14 17:48:17.984090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.905 [2024-10-14 17:48:17.984097] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.905 [2024-10-14 17:48:17.984102] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.905 [2024-10-14 17:48:17.984116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.905 qpair failed and we were unable to recover it. 
00:31:18.905 [2024-10-14 17:48:17.994068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.905 [2024-10-14 17:48:17.994153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.905 [2024-10-14 17:48:17.994166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.905 [2024-10-14 17:48:17.994173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.905 [2024-10-14 17:48:17.994178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.905 [2024-10-14 17:48:17.994192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.905 qpair failed and we were unable to recover it. 00:31:18.905 [2024-10-14 17:48:18.004063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.905 [2024-10-14 17:48:18.004116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.905 [2024-10-14 17:48:18.004130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.905 [2024-10-14 17:48:18.004136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.905 [2024-10-14 17:48:18.004142] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.905 [2024-10-14 17:48:18.004155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.905 qpair failed and we were unable to recover it. 00:31:18.905 [2024-10-14 17:48:18.014100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.905 [2024-10-14 17:48:18.014156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.905 [2024-10-14 17:48:18.014169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.905 [2024-10-14 17:48:18.014176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.905 [2024-10-14 17:48:18.014182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.905 [2024-10-14 17:48:18.014195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.905 qpair failed and we were unable to recover it. 
00:31:18.905 [2024-10-14 17:48:18.024119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.905 [2024-10-14 17:48:18.024172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.906 [2024-10-14 17:48:18.024189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.906 [2024-10-14 17:48:18.024196] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.906 [2024-10-14 17:48:18.024202] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:18.906 [2024-10-14 17:48:18.024216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:18.906 qpair failed and we were unable to recover it. 00:31:19.205 [2024-10-14 17:48:18.034155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.205 [2024-10-14 17:48:18.034213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.205 [2024-10-14 17:48:18.034231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.205 [2024-10-14 17:48:18.034242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.205 [2024-10-14 17:48:18.034252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.205 [2024-10-14 17:48:18.034269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.205 qpair failed and we were unable to recover it. 00:31:19.205 [2024-10-14 17:48:18.044174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.205 [2024-10-14 17:48:18.044258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.205 [2024-10-14 17:48:18.044275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.205 [2024-10-14 17:48:18.044282] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.205 [2024-10-14 17:48:18.044288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.206 [2024-10-14 17:48:18.044304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.206 qpair failed and we were unable to recover it. 
00:31:19.206 [2024-10-14 17:48:18.054245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.206 [2024-10-14 17:48:18.054348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.206 [2024-10-14 17:48:18.054367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.206 [2024-10-14 17:48:18.054378] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.206 [2024-10-14 17:48:18.054386] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.206 [2024-10-14 17:48:18.054404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.206 qpair failed and we were unable to recover it. 00:31:19.206 [2024-10-14 17:48:18.064255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.206 [2024-10-14 17:48:18.064308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.206 [2024-10-14 17:48:18.064325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.206 [2024-10-14 17:48:18.064332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.206 [2024-10-14 17:48:18.064339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.206 [2024-10-14 17:48:18.064354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.206 qpair failed and we were unable to recover it. 00:31:19.206 [2024-10-14 17:48:18.074258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.206 [2024-10-14 17:48:18.074311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.206 [2024-10-14 17:48:18.074329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.206 [2024-10-14 17:48:18.074337] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.206 [2024-10-14 17:48:18.074343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.206 [2024-10-14 17:48:18.074357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.206 qpair failed and we were unable to recover it. 
00:31:19.206 [2024-10-14 17:48:18.084333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.206 [2024-10-14 17:48:18.084382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.206 [2024-10-14 17:48:18.084396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.206 [2024-10-14 17:48:18.084403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.206 [2024-10-14 17:48:18.084409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.206 [2024-10-14 17:48:18.084423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.206 qpair failed and we were unable to recover it. 00:31:19.206 [2024-10-14 17:48:18.094322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.206 [2024-10-14 17:48:18.094377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.206 [2024-10-14 17:48:18.094391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.206 [2024-10-14 17:48:18.094397] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.206 [2024-10-14 17:48:18.094403] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.206 [2024-10-14 17:48:18.094417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.206 qpair failed and we were unable to recover it. 00:31:19.206 [2024-10-14 17:48:18.104362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.206 [2024-10-14 17:48:18.104419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.206 [2024-10-14 17:48:18.104434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.206 [2024-10-14 17:48:18.104441] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.206 [2024-10-14 17:48:18.104446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.206 [2024-10-14 17:48:18.104461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.206 qpair failed and we were unable to recover it. 
00:31:19.206 [2024-10-14 17:48:18.114373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.206 [2024-10-14 17:48:18.114426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.206 [2024-10-14 17:48:18.114442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.206 [2024-10-14 17:48:18.114448] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.206 [2024-10-14 17:48:18.114455] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.206 [2024-10-14 17:48:18.114473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.206 qpair failed and we were unable to recover it. 00:31:19.206 [2024-10-14 17:48:18.124392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.206 [2024-10-14 17:48:18.124440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.206 [2024-10-14 17:48:18.124454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.206 [2024-10-14 17:48:18.124460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.206 [2024-10-14 17:48:18.124467] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.206 [2024-10-14 17:48:18.124481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.206 qpair failed and we were unable to recover it. 00:31:19.206 [2024-10-14 17:48:18.134469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.206 [2024-10-14 17:48:18.134528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.206 [2024-10-14 17:48:18.134542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.206 [2024-10-14 17:48:18.134549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.206 [2024-10-14 17:48:18.134555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.206 [2024-10-14 17:48:18.134569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.206 qpair failed and we were unable to recover it. 
00:31:19.206 [2024-10-14 17:48:18.144455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.206 [2024-10-14 17:48:18.144545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.206 [2024-10-14 17:48:18.144558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.206 [2024-10-14 17:48:18.144565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.206 [2024-10-14 17:48:18.144571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.206 [2024-10-14 17:48:18.144584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.206 qpair failed and we were unable to recover it. 00:31:19.206 [2024-10-14 17:48:18.154486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.206 [2024-10-14 17:48:18.154563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.206 [2024-10-14 17:48:18.154577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.206 [2024-10-14 17:48:18.154583] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.206 [2024-10-14 17:48:18.154589] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.206 [2024-10-14 17:48:18.154606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.206 qpair failed and we were unable to recover it. 00:31:19.206 [2024-10-14 17:48:18.164517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.206 [2024-10-14 17:48:18.164568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.206 [2024-10-14 17:48:18.164585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.206 [2024-10-14 17:48:18.164592] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.206 [2024-10-14 17:48:18.164598] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.206 [2024-10-14 17:48:18.164615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.206 qpair failed and we were unable to recover it. 
00:31:19.206 [2024-10-14 17:48:18.174548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.206 [2024-10-14 17:48:18.174607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.206 [2024-10-14 17:48:18.174622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.206 [2024-10-14 17:48:18.174629] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.206 [2024-10-14 17:48:18.174634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.206 [2024-10-14 17:48:18.174649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.206 qpair failed and we were unable to recover it. 00:31:19.206 [2024-10-14 17:48:18.184574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.207 [2024-10-14 17:48:18.184658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.207 [2024-10-14 17:48:18.184672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.207 [2024-10-14 17:48:18.184679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.207 [2024-10-14 17:48:18.184685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.207 [2024-10-14 17:48:18.184700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.207 qpair failed and we were unable to recover it. 00:31:19.207 [2024-10-14 17:48:18.194631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.207 [2024-10-14 17:48:18.194711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.207 [2024-10-14 17:48:18.194725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.207 [2024-10-14 17:48:18.194732] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.207 [2024-10-14 17:48:18.194737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.207 [2024-10-14 17:48:18.194752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.207 qpair failed and we were unable to recover it. 
00:31:19.207 [2024-10-14 17:48:18.204632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.207 [2024-10-14 17:48:18.204686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.207 [2024-10-14 17:48:18.204700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.207 [2024-10-14 17:48:18.204707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.207 [2024-10-14 17:48:18.204713] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.207 [2024-10-14 17:48:18.204730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.207 qpair failed and we were unable to recover it. 00:31:19.207 [2024-10-14 17:48:18.214691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.207 [2024-10-14 17:48:18.214756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.207 [2024-10-14 17:48:18.214771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.207 [2024-10-14 17:48:18.214778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.207 [2024-10-14 17:48:18.214784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.207 [2024-10-14 17:48:18.214798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.207 qpair failed and we were unable to recover it. 00:31:19.207 [2024-10-14 17:48:18.224674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.207 [2024-10-14 17:48:18.224726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.207 [2024-10-14 17:48:18.224740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.207 [2024-10-14 17:48:18.224747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.207 [2024-10-14 17:48:18.224753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.207 [2024-10-14 17:48:18.224767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.207 qpair failed and we were unable to recover it. 
00:31:19.207 [2024-10-14 17:48:18.234715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.207 [2024-10-14 17:48:18.234769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.207 [2024-10-14 17:48:18.234783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.207 [2024-10-14 17:48:18.234789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.207 [2024-10-14 17:48:18.234795] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.207 [2024-10-14 17:48:18.234809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.207 qpair failed and we were unable to recover it. 00:31:19.207 [2024-10-14 17:48:18.244733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.207 [2024-10-14 17:48:18.244808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.207 [2024-10-14 17:48:18.244822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.207 [2024-10-14 17:48:18.244828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.207 [2024-10-14 17:48:18.244834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.207 [2024-10-14 17:48:18.244848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.207 qpair failed and we were unable to recover it. 00:31:19.207 [2024-10-14 17:48:18.254777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.207 [2024-10-14 17:48:18.254834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.207 [2024-10-14 17:48:18.254851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.207 [2024-10-14 17:48:18.254858] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.207 [2024-10-14 17:48:18.254863] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.207 [2024-10-14 17:48:18.254877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.207 qpair failed and we were unable to recover it. 
00:31:19.207 [2024-10-14 17:48:18.264744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.207 [2024-10-14 17:48:18.264802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.207 [2024-10-14 17:48:18.264816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.207 [2024-10-14 17:48:18.264823] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.207 [2024-10-14 17:48:18.264829] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.207 [2024-10-14 17:48:18.264844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.207 qpair failed and we were unable to recover it. 00:31:19.207 [2024-10-14 17:48:18.274839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.207 [2024-10-14 17:48:18.274890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.207 [2024-10-14 17:48:18.274904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.207 [2024-10-14 17:48:18.274911] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.207 [2024-10-14 17:48:18.274917] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.207 [2024-10-14 17:48:18.274931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.207 qpair failed and we were unable to recover it. 00:31:19.207 [2024-10-14 17:48:18.284886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.207 [2024-10-14 17:48:18.284951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.207 [2024-10-14 17:48:18.284965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.207 [2024-10-14 17:48:18.284972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.207 [2024-10-14 17:48:18.284977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.207 [2024-10-14 17:48:18.284991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.207 qpair failed and we were unable to recover it. 
00:31:19.207 [2024-10-14 17:48:18.294845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.207 [2024-10-14 17:48:18.294920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.207 [2024-10-14 17:48:18.294934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.207 [2024-10-14 17:48:18.294940] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.207 [2024-10-14 17:48:18.294946] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.207 [2024-10-14 17:48:18.294963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.207 qpair failed and we were unable to recover it. 00:31:19.207 [2024-10-14 17:48:18.304958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.207 [2024-10-14 17:48:18.305060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.207 [2024-10-14 17:48:18.305073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.207 [2024-10-14 17:48:18.305080] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.207 [2024-10-14 17:48:18.305086] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.207 [2024-10-14 17:48:18.305101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.207 qpair failed and we were unable to recover it. 00:31:19.207 [2024-10-14 17:48:18.314878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.207 [2024-10-14 17:48:18.314940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.207 [2024-10-14 17:48:18.314954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.207 [2024-10-14 17:48:18.314960] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.207 [2024-10-14 17:48:18.314966] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.207 [2024-10-14 17:48:18.314980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.207 qpair failed and we were unable to recover it. 
00:31:19.207 [2024-10-14 17:48:18.325006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.207 [2024-10-14 17:48:18.325065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.207 [2024-10-14 17:48:18.325079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.208 [2024-10-14 17:48:18.325086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.208 [2024-10-14 17:48:18.325091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.208 [2024-10-14 17:48:18.325105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.208 qpair failed and we were unable to recover it. 00:31:19.208 [2024-10-14 17:48:18.335025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.208 [2024-10-14 17:48:18.335079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.208 [2024-10-14 17:48:18.335092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.208 [2024-10-14 17:48:18.335099] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.208 [2024-10-14 17:48:18.335105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.208 [2024-10-14 17:48:18.335118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.208 qpair failed and we were unable to recover it. 00:31:19.484 [2024-10-14 17:48:18.344984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.484 [2024-10-14 17:48:18.345036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.484 [2024-10-14 17:48:18.345061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.484 [2024-10-14 17:48:18.345068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.484 [2024-10-14 17:48:18.345075] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.484 [2024-10-14 17:48:18.345092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.484 qpair failed and we were unable to recover it. 
00:31:19.484 [2024-10-14 17:48:18.355001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.484 [2024-10-14 17:48:18.355055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.484 [2024-10-14 17:48:18.355072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.484 [2024-10-14 17:48:18.355079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.484 [2024-10-14 17:48:18.355085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.484 [2024-10-14 17:48:18.355101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.484 qpair failed and we were unable to recover it. 00:31:19.484 [2024-10-14 17:48:18.365036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.484 [2024-10-14 17:48:18.365090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.484 [2024-10-14 17:48:18.365105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.484 [2024-10-14 17:48:18.365112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.484 [2024-10-14 17:48:18.365117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.484 [2024-10-14 17:48:18.365131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.484 qpair failed and we were unable to recover it. 00:31:19.484 [2024-10-14 17:48:18.375066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.484 [2024-10-14 17:48:18.375120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.484 [2024-10-14 17:48:18.375134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.484 [2024-10-14 17:48:18.375140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.484 [2024-10-14 17:48:18.375146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.484 [2024-10-14 17:48:18.375160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.484 qpair failed and we were unable to recover it. 
00:31:19.484 [2024-10-14 17:48:18.385208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.484 [2024-10-14 17:48:18.385268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.484 [2024-10-14 17:48:18.385282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.484 [2024-10-14 17:48:18.385288] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.484 [2024-10-14 17:48:18.385294] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.484 [2024-10-14 17:48:18.385311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.484 qpair failed and we were unable to recover it. 00:31:19.484 [2024-10-14 17:48:18.395187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.484 [2024-10-14 17:48:18.395272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.484 [2024-10-14 17:48:18.395286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.484 [2024-10-14 17:48:18.395293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.484 [2024-10-14 17:48:18.395299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.484 [2024-10-14 17:48:18.395313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.484 qpair failed and we were unable to recover it. 00:31:19.484 [2024-10-14 17:48:18.405150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.484 [2024-10-14 17:48:18.405199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.484 [2024-10-14 17:48:18.405213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.484 [2024-10-14 17:48:18.405219] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.485 [2024-10-14 17:48:18.405225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.485 [2024-10-14 17:48:18.405239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.485 qpair failed and we were unable to recover it. 
00:31:19.485 [2024-10-14 17:48:18.415287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.485 [2024-10-14 17:48:18.415392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.485 [2024-10-14 17:48:18.415408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.485 [2024-10-14 17:48:18.415415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.485 [2024-10-14 17:48:18.415421] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.485 [2024-10-14 17:48:18.415435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.485 qpair failed and we were unable to recover it. 00:31:19.485 [2024-10-14 17:48:18.425298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.485 [2024-10-14 17:48:18.425350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.485 [2024-10-14 17:48:18.425365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.485 [2024-10-14 17:48:18.425371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.485 [2024-10-14 17:48:18.425377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.485 [2024-10-14 17:48:18.425391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.485 qpair failed and we were unable to recover it. 00:31:19.485 [2024-10-14 17:48:18.435318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.485 [2024-10-14 17:48:18.435374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.485 [2024-10-14 17:48:18.435391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.485 [2024-10-14 17:48:18.435398] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.485 [2024-10-14 17:48:18.435403] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.485 [2024-10-14 17:48:18.435417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.485 qpair failed and we were unable to recover it. 
00:31:19.485 [2024-10-14 17:48:18.445253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.485 [2024-10-14 17:48:18.445311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.485 [2024-10-14 17:48:18.445325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.485 [2024-10-14 17:48:18.445332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.485 [2024-10-14 17:48:18.445338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.485 [2024-10-14 17:48:18.445352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.485 qpair failed and we were unable to recover it. 00:31:19.485 [2024-10-14 17:48:18.455364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.485 [2024-10-14 17:48:18.455417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.485 [2024-10-14 17:48:18.455431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.485 [2024-10-14 17:48:18.455438] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.485 [2024-10-14 17:48:18.455444] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.485 [2024-10-14 17:48:18.455457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.485 qpair failed and we were unable to recover it. 00:31:19.485 [2024-10-14 17:48:18.465319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.485 [2024-10-14 17:48:18.465378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.485 [2024-10-14 17:48:18.465393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.485 [2024-10-14 17:48:18.465400] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.485 [2024-10-14 17:48:18.465406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.485 [2024-10-14 17:48:18.465420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.485 qpair failed and we were unable to recover it. 
00:31:19.485 [2024-10-14 17:48:18.475412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.485 [2024-10-14 17:48:18.475504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.485 [2024-10-14 17:48:18.475519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.485 [2024-10-14 17:48:18.475525] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.485 [2024-10-14 17:48:18.475535] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.485 [2024-10-14 17:48:18.475549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.485 qpair failed and we were unable to recover it. 00:31:19.485 [2024-10-14 17:48:18.485375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.485 [2024-10-14 17:48:18.485424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.485 [2024-10-14 17:48:18.485437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.485 [2024-10-14 17:48:18.485444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.485 [2024-10-14 17:48:18.485450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.485 [2024-10-14 17:48:18.485464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.485 qpair failed and we were unable to recover it. 00:31:19.485 [2024-10-14 17:48:18.495405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.485 [2024-10-14 17:48:18.495457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.485 [2024-10-14 17:48:18.495471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.485 [2024-10-14 17:48:18.495478] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.485 [2024-10-14 17:48:18.495484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60 00:31:19.485 [2024-10-14 17:48:18.495498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.485 qpair failed and we were unable to recover it. 
00:31:19.485 [2024-10-14 17:48:18.505499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.485 [2024-10-14 17:48:18.505556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.485 [2024-10-14 17:48:18.505571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.485 [2024-10-14 17:48:18.505577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.485 [2024-10-14 17:48:18.505584] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.485 [2024-10-14 17:48:18.505598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.485 qpair failed and we were unable to recover it.
00:31:19.485 [2024-10-14 17:48:18.515453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.485 [2024-10-14 17:48:18.515514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.485 [2024-10-14 17:48:18.515529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.485 [2024-10-14 17:48:18.515536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.485 [2024-10-14 17:48:18.515542] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.485 [2024-10-14 17:48:18.515555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.485 qpair failed and we were unable to recover it.
00:31:19.485 [2024-10-14 17:48:18.525539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.485 [2024-10-14 17:48:18.525617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.485 [2024-10-14 17:48:18.525632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.485 [2024-10-14 17:48:18.525639] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.485 [2024-10-14 17:48:18.525644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.485 [2024-10-14 17:48:18.525659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.485 qpair failed and we were unable to recover it.
00:31:19.485 [2024-10-14 17:48:18.535517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.485 [2024-10-14 17:48:18.535573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.485 [2024-10-14 17:48:18.535587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.485 [2024-10-14 17:48:18.535593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.485 [2024-10-14 17:48:18.535604] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.485 [2024-10-14 17:48:18.535618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.485 qpair failed and we were unable to recover it.
00:31:19.485 [2024-10-14 17:48:18.545558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.485 [2024-10-14 17:48:18.545624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.485 [2024-10-14 17:48:18.545638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.485 [2024-10-14 17:48:18.545644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.485 [2024-10-14 17:48:18.545650] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.486 [2024-10-14 17:48:18.545664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.486 qpair failed and we were unable to recover it.
00:31:19.486 [2024-10-14 17:48:18.555649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.486 [2024-10-14 17:48:18.555730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.486 [2024-10-14 17:48:18.555744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.486 [2024-10-14 17:48:18.555751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.486 [2024-10-14 17:48:18.555756] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.486 [2024-10-14 17:48:18.555771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.486 qpair failed and we were unable to recover it.
00:31:19.486 [2024-10-14 17:48:18.565687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.486 [2024-10-14 17:48:18.565740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.486 [2024-10-14 17:48:18.565754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.486 [2024-10-14 17:48:18.565760] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.486 [2024-10-14 17:48:18.565770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.486 [2024-10-14 17:48:18.565785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.486 qpair failed and we were unable to recover it.
00:31:19.486 [2024-10-14 17:48:18.575714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.486 [2024-10-14 17:48:18.575798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.486 [2024-10-14 17:48:18.575812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.486 [2024-10-14 17:48:18.575819] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.486 [2024-10-14 17:48:18.575825] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.486 [2024-10-14 17:48:18.575839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.486 qpair failed and we were unable to recover it.
00:31:19.486 [2024-10-14 17:48:18.585704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.486 [2024-10-14 17:48:18.585758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.486 [2024-10-14 17:48:18.585772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.486 [2024-10-14 17:48:18.585779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.486 [2024-10-14 17:48:18.585785] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.486 [2024-10-14 17:48:18.585798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.486 qpair failed and we were unable to recover it.
00:31:19.486 [2024-10-14 17:48:18.595766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.486 [2024-10-14 17:48:18.595817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.486 [2024-10-14 17:48:18.595831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.486 [2024-10-14 17:48:18.595837] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.486 [2024-10-14 17:48:18.595843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.486 [2024-10-14 17:48:18.595857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.486 qpair failed and we were unable to recover it.
00:31:19.486 [2024-10-14 17:48:18.605780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.486 [2024-10-14 17:48:18.605834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.486 [2024-10-14 17:48:18.605848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.486 [2024-10-14 17:48:18.605855] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.486 [2024-10-14 17:48:18.605861] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.486 [2024-10-14 17:48:18.605874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.486 qpair failed and we were unable to recover it.
00:31:19.486 [2024-10-14 17:48:18.615871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.486 [2024-10-14 17:48:18.615930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.486 [2024-10-14 17:48:18.615944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.486 [2024-10-14 17:48:18.615950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.486 [2024-10-14 17:48:18.615956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.486 [2024-10-14 17:48:18.615970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.486 qpair failed and we were unable to recover it.
00:31:19.746 [2024-10-14 17:48:18.625851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.746 [2024-10-14 17:48:18.625907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.746 [2024-10-14 17:48:18.625925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.746 [2024-10-14 17:48:18.625932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.746 [2024-10-14 17:48:18.625937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.746 [2024-10-14 17:48:18.625953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.746 qpair failed and we were unable to recover it.
00:31:19.746 [2024-10-14 17:48:18.635874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.746 [2024-10-14 17:48:18.635924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.746 [2024-10-14 17:48:18.635940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.746 [2024-10-14 17:48:18.635947] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.746 [2024-10-14 17:48:18.635953] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.746 [2024-10-14 17:48:18.635967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.746 qpair failed and we were unable to recover it.
00:31:19.746 [2024-10-14 17:48:18.645901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.746 [2024-10-14 17:48:18.645953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.746 [2024-10-14 17:48:18.645968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.746 [2024-10-14 17:48:18.645975] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.746 [2024-10-14 17:48:18.645981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.746 [2024-10-14 17:48:18.645995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.746 qpair failed and we were unable to recover it.
00:31:19.746 [2024-10-14 17:48:18.655997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.746 [2024-10-14 17:48:18.656096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.746 [2024-10-14 17:48:18.656110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.746 [2024-10-14 17:48:18.656117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.746 [2024-10-14 17:48:18.656125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.746 [2024-10-14 17:48:18.656140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.746 qpair failed and we were unable to recover it.
00:31:19.746 [2024-10-14 17:48:18.665960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.746 [2024-10-14 17:48:18.666018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.746 [2024-10-14 17:48:18.666032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.746 [2024-10-14 17:48:18.666039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.746 [2024-10-14 17:48:18.666045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.746 [2024-10-14 17:48:18.666059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.747 qpair failed and we were unable to recover it.
00:31:19.747 [2024-10-14 17:48:18.676003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.747 [2024-10-14 17:48:18.676055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.747 [2024-10-14 17:48:18.676070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.747 [2024-10-14 17:48:18.676077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.747 [2024-10-14 17:48:18.676085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.747 [2024-10-14 17:48:18.676099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.747 qpair failed and we were unable to recover it.
00:31:19.747 [2024-10-14 17:48:18.685998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.747 [2024-10-14 17:48:18.686091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.747 [2024-10-14 17:48:18.686104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.747 [2024-10-14 17:48:18.686111] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.747 [2024-10-14 17:48:18.686116] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.747 [2024-10-14 17:48:18.686131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.747 qpair failed and we were unable to recover it.
00:31:19.747 [2024-10-14 17:48:18.696056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.747 [2024-10-14 17:48:18.696116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.747 [2024-10-14 17:48:18.696130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.747 [2024-10-14 17:48:18.696137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.747 [2024-10-14 17:48:18.696143] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.747 [2024-10-14 17:48:18.696156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.747 qpair failed and we were unable to recover it.
00:31:19.747 [2024-10-14 17:48:18.706087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.747 [2024-10-14 17:48:18.706153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.747 [2024-10-14 17:48:18.706166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.747 [2024-10-14 17:48:18.706173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.747 [2024-10-14 17:48:18.706179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.747 [2024-10-14 17:48:18.706192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.747 qpair failed and we were unable to recover it.
00:31:19.747 [2024-10-14 17:48:18.716108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.747 [2024-10-14 17:48:18.716163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.747 [2024-10-14 17:48:18.716177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.747 [2024-10-14 17:48:18.716184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.747 [2024-10-14 17:48:18.716190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2491c60
00:31:19.747 [2024-10-14 17:48:18.716203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:19.747 qpair failed and we were unable to recover it.
00:31:19.747 [2024-10-14 17:48:18.726112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.747 [2024-10-14 17:48:18.726219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.747 [2024-10-14 17:48:18.726267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.747 [2024-10-14 17:48:18.726289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.747 [2024-10-14 17:48:18.726306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a18000b90
00:31:19.747 [2024-10-14 17:48:18.726350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:19.747 qpair failed and we were unable to recover it.
00:31:19.747 [2024-10-14 17:48:18.736110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.747 [2024-10-14 17:48:18.736180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.747 [2024-10-14 17:48:18.736205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.747 [2024-10-14 17:48:18.736217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.747 [2024-10-14 17:48:18.736228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a18000b90
00:31:19.747 [2024-10-14 17:48:18.736254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:19.747 qpair failed and we were unable to recover it.
00:31:19.747 [2024-10-14 17:48:18.746204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.747 [2024-10-14 17:48:18.746311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.747 [2024-10-14 17:48:18.746359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.747 [2024-10-14 17:48:18.746381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.747 [2024-10-14 17:48:18.746407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a14000b90
00:31:19.747 [2024-10-14 17:48:18.746449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:31:19.747 qpair failed and we were unable to recover it.
00:31:19.747 [2024-10-14 17:48:18.756188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.747 [2024-10-14 17:48:18.756269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.747 [2024-10-14 17:48:18.756308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.747 [2024-10-14 17:48:18.756320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.747 [2024-10-14 17:48:18.756331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a14000b90
00:31:19.747 [2024-10-14 17:48:18.756357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:31:19.747 qpair failed and we were unable to recover it.
00:31:19.747 [2024-10-14 17:48:18.756453] nvme_ctrlr.c:4536:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:31:19.747 A controller has encountered a failure and is being reset.
00:31:19.747 [2024-10-14 17:48:18.766370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.747 [2024-10-14 17:48:18.766452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.747 [2024-10-14 17:48:18.766497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.747 [2024-10-14 17:48:18.766517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.747 [2024-10-14 17:48:18.766534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a20000b90
00:31:19.747 [2024-10-14 17:48:18.766578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:19.747 qpair failed and we were unable to recover it.
00:31:19.747 [2024-10-14 17:48:18.776262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:19.747 [2024-10-14 17:48:18.776367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:19.747 [2024-10-14 17:48:18.776401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:19.747 [2024-10-14 17:48:18.776418] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:19.747 [2024-10-14 17:48:18.776433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1a20000b90
00:31:19.747 [2024-10-14 17:48:18.776467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:19.747 qpair failed and we were unable to recover it.
00:31:20.006 Controller properly reset.
00:31:20.006 Initializing NVMe Controllers
00:31:20.006 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:20.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:20.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:31:20.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:31:20.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:31:20.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:31:20.006 Initialization complete. Launching workers.
00:31:20.006 Starting thread on core 1
00:31:20.006 Starting thread on core 2
00:31:20.006 Starting thread on core 3
00:31:20.007 Starting thread on core 0
00:31:20.007 17:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:31:20.007
00:31:20.007 real 0m10.956s
00:31:20.007 user 0m19.261s
00:31:20.007 sys 0m4.671s
00:31:20.007 17:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:20.007 17:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:20.007 ************************************
00:31:20.007 END TEST nvmf_target_disconnect_tc2
00:31:20.007 ************************************
00:31:20.007 17:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:31:20.007 17:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:31:20.007 17:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:31:20.007 17:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup
00:31:20.007 17:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:31:20.007 17:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:20.007 17:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:31:20.007 17:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:20.007 17:48:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:20.007 rmmod nvme_tcp
00:31:20.007 rmmod nvme_fabrics
00:31:20.007 rmmod nvme_keyring
00:31:20.007 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:20.007 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:31:20.007 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:31:20.007 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 1268258 ']'
00:31:20.007 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 1268258
00:31:20.007 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1268258 ']'
00:31:20.007 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1268258
00:31:20.007 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname
00:31:20.007 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:20.007 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1268258
00:31:20.007 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4
00:31:20.007 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']'
00:31:20.007 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1268258'
00:31:20.007 killing process with pid 1268258
00:31:20.007 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 1268258
00:31:20.007 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1268258
00:31:20.266 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:31:20.266 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:31:20.266 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:31:20.266 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:31:20.266 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save
00:31:20.266 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:31:20.266 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore
00:31:20.266 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:20.266 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:20.266 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:20.266 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:20.266 17:48:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:22.804 17:48:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:22.804
00:31:22.804 real 0m19.720s
00:31:22.804 user 0m47.625s
00:31:22.804 sys 0m9.605s
00:31:22.804 17:48:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:22.804 17:48:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:31:22.804 ************************************
00:31:22.804 END TEST nvmf_target_disconnect
00:31:22.804 ************************************
00:31:22.804 17:48:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:31:22.804
00:31:22.804 real 5m51.375s
00:31:22.804 user 10m27.817s
00:31:22.804 sys 1m57.909s
00:31:22.804 17:48:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:22.804 17:48:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:31:22.804 ************************************
00:31:22.804 END TEST nvmf_host
00:31:22.804 ************************************
00:31:22.804 17:48:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:31:22.804 17:48:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:31:22.804 17:48:21 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:31:22.804 17:48:21 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:31:22.804 17:48:21 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:22.804 17:48:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:31:22.804 ************************************
00:31:22.804 START TEST nvmf_target_core_interrupt_mode
00:31:22.804 ************************************
00:31:22.804 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:31:22.804 * Looking for test storage...
00:31:22.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:31:22.804 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:31:22.804 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version
00:31:22.804 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:31:22.804 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:31:22.804 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:22.804 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:22.804 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:22.804 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:31:22.804 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:31:22.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:22.805 --rc genhtml_branch_coverage=1
00:31:22.805 --rc genhtml_function_coverage=1
00:31:22.805 --rc genhtml_legend=1
00:31:22.805 --rc geninfo_all_blocks=1
00:31:22.805 --rc geninfo_unexecuted_blocks=1
00:31:22.805
00:31:22.805 '
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:31:22.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:22.805 --rc genhtml_branch_coverage=1
00:31:22.805 --rc genhtml_function_coverage=1
00:31:22.805 --rc genhtml_legend=1
00:31:22.805 --rc geninfo_all_blocks=1
00:31:22.805 --rc geninfo_unexecuted_blocks=1
00:31:22.805
00:31:22.805 '
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:31:22.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:22.805 --rc genhtml_branch_coverage=1
00:31:22.805 --rc genhtml_function_coverage=1
00:31:22.805 --rc genhtml_legend=1
00:31:22.805 --rc geninfo_all_blocks=1
00:31:22.805 --rc geninfo_unexecuted_blocks=1
00:31:22.805
00:31:22.805 '
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:31:22.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:22.805 --rc genhtml_branch_coverage=1
00:31:22.805 --rc genhtml_function_coverage=1
00:31:22.805 --rc genhtml_legend=1
00:31:22.805 --rc geninfo_all_blocks=1
00:31:22.805 --rc geninfo_unexecuted_blocks=1
00:31:22.805
00:31:22.805 '
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:22.805 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:22.805 ************************************
00:31:22.805 START TEST nvmf_abort
00:31:22.805 ************************************
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:31:22.806 * Looking for test storage...
00:31:22.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:31:22.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:22.806 --rc genhtml_branch_coverage=1
00:31:22.806 --rc genhtml_function_coverage=1
00:31:22.806 --rc genhtml_legend=1
00:31:22.806 --rc geninfo_all_blocks=1
00:31:22.806 --rc geninfo_unexecuted_blocks=1
00:31:22.806
00:31:22.806 '
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:31:22.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:22.806 --rc genhtml_branch_coverage=1
00:31:22.806 --rc genhtml_function_coverage=1
00:31:22.806 --rc genhtml_legend=1
00:31:22.806 --rc geninfo_all_blocks=1
00:31:22.806 --rc geninfo_unexecuted_blocks=1
00:31:22.806
00:31:22.806 '
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:31:22.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:22.806 --rc genhtml_branch_coverage=1
00:31:22.806 --rc genhtml_function_coverage=1
00:31:22.806 --rc genhtml_legend=1
00:31:22.806 --rc geninfo_all_blocks=1
00:31:22.806 --rc geninfo_unexecuted_blocks=1
00:31:22.806
00:31:22.806 '
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:31:22.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:22.806 --rc genhtml_branch_coverage=1
00:31:22.806 --rc genhtml_function_coverage=1
00:31:22.806 --rc genhtml_legend=1
00:31:22.806 --rc geninfo_all_blocks=1
00:31:22.806 --rc geninfo_unexecuted_blocks=1
00:31:22.806
00:31:22.806 '
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable
00:31:22.806 17:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:29.375 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:31:29.375 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=()
00:31:29.375 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs
00:31:29.375 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=()
00:31:29.375 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:31:29.375 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=()
00:31:29.375 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers
00:31:29.375 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=()
00:31:29.375 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs
00:31:29.375 17:48:27
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:31:29.375 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:31:29.375 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:31:29.375 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:29.376 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
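The trace above is nvmf/common.sh classifying the machine's NICs: a pci_bus_cache lookup seeds the e810/x722/mlx arrays by vendor:device ID, and both ports of an Intel E810 (0x8086:0x159b, ice driver) turn up at 0000:86:00.0/1. A minimal standalone sketch of the same classification, reading sysfs directly instead of SPDK's cache (only the vendor/device IDs below come from the trace; everything else is illustrative):

  #!/usr/bin/env bash
  # Illustrative only: collect E810 ports by PCI vendor/device ID via sysfs.
  intel=0x8086
  e810=()
  for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    if [[ $vendor == "$intel" && ( $device == 0x1592 || $device == 0x159b ) ]]; then
      e810+=("${dev##*/}")                     # keep the BDF, e.g. 0000:86:00.0
      echo "Found ${dev##*/} ($vendor - $device)"
    fi
  done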
00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:29.376 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:29.376 Found net devices under 0000:86:00.0: cvl_0_0 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:29.376 Found net devices under 0000:86:00.1: cvl_0_1 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:29.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:29.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:31:29.376 00:31:29.376 --- 10.0.0.2 ping statistics --- 00:31:29.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.376 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:29.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:29.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:31:29.376 00:31:29.376 --- 10.0.0.1 ping statistics --- 00:31:29.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.376 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:29.376 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:29.377 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:29.377 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=1273009 00:31:29.377 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:29.377 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1273009 00:31:29.377 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1273009 ']' 00:31:29.377 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:29.377 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:29.377 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:29.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:29.377 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:29.377 17:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:29.377 [2024-10-14 17:48:27.870684] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:29.377 [2024-10-14 17:48:27.871565] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:31:29.377 [2024-10-14 17:48:27.871598] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:29.377 [2024-10-14 17:48:27.941801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:29.377 [2024-10-14 17:48:27.984610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:29.377 [2024-10-14 17:48:27.984643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:29.377 [2024-10-14 17:48:27.984651] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:29.377 [2024-10-14 17:48:27.984658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:29.377 [2024-10-14 17:48:27.984663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:29.377 [2024-10-14 17:48:27.985921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:29.377 [2024-10-14 17:48:27.986027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:29.377 [2024-10-14 17:48:27.986028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:29.377 [2024-10-14 17:48:28.053812] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:29.377 [2024-10-14 17:48:28.054831] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:29.377 [2024-10-14 17:48:28.055140] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
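Summarizing the bring-up traced above: nvmf_tcp_init moves one E810 port (cvl_0_0) into a private namespace with address 10.0.0.2/24, leaves its sibling (cvl_0_1, 10.0.0.1/24) in the root namespace as the initiator side, opens TCP port 4420 with an iptables rule tagged SPDK_NVMF so it can be filtered back out at teardown, and then launches nvmf_tgt inside the namespace in interrupt mode with core mask 0xE (reactors on cores 1-3). Condensed into one block; the interface and namespace names are simply the ones this run picked:

  # Condensed from the nvmf_tcp_init trace above (names from this run).
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"              # target-side port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &

The two pings that follow the wiring in the trace are the sanity check that each side can reach the other before any NVMe traffic is attempted.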
00:31:29.377 [2024-10-14 17:48:28.055283] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:29.377 [2024-10-14 17:48:28.130705] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:29.377 Malloc0 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:29.377 Delay0 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:29.377 [2024-10-14 17:48:28.222731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.377 17:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:31:29.377 [2024-10-14 17:48:28.335302] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:31.278 Initializing NVMe Controllers 00:31:31.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:31.278 controller IO queue size 128 less than required 00:31:31.278 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:31:31.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:31:31.278 Initialization complete. Launching workers. 
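Before the abort run whose statistics follow, abort.sh provisioned the target over /var/tmp/spdk.sock with the rpc_cmd calls traced above. Restated as plain scripts/rpc.py invocations (arguments exactly as traced; only the invocation style differs), followed by the example itself:

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB bdev, 4 KiB blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s added latency per I/O
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # The workload: one core, 1 s run, queue depth 128, warnings only.
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128

The delay bdev is the point of the test: each I/O is held long enough that it is still outstanding when the abort arrives, which is presumably why the statistics below show nearly every submitted abort succeeding.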
00:31:31.278 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36904 00:31:31.278 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36961, failed to submit 66 00:31:31.278 success 36904, unsuccessful 57, failed 0 00:31:31.278 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:31.278 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.278 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:31.278 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.278 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:31:31.278 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:31:31.278 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:31.278 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:31:31.537 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:31.537 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:31:31.537 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:31.537 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:31.537 rmmod nvme_tcp 00:31:31.537 rmmod nvme_fabrics 00:31:31.537 rmmod nvme_keyring 00:31:31.537 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:31.537 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:31:31.537 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:31:31.537 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1273009 ']' 00:31:31.537 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1273009 00:31:31.537 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1273009 ']' 00:31:31.537 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1273009 00:31:31.537 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:31:31.537 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:31.537 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1273009 00:31:31.537 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:31.537 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:31.538 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1273009' 00:31:31.538 killing process with pid 1273009 
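Teardown then proceeds in the usual nvmftestfini order: unload the nvme-tcp/nvme-fabrics/nvme-keyring module stack, kill the target by pid (after the ps/comm sanity check above, which refuses to kill anything that resolves to sudo), restore iptables by replaying a save with the SPDK_NVMF-tagged rules filtered out, and remove the namespace. A killprocess-style helper in miniature (an illustrative sketch, not the autotest_common.sh source):

  # Illustrative kill-and-wait helper in the spirit of the trace above.
  killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0     # already gone
    ps --no-headers -o comm= "$pid"            # log what we are about to kill
    kill "$pid" && wait "$pid" || true         # SIGTERM, then reap; wait works
  }                                            # because the target is our child
  killprocess 1273009
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # the iptr step below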
00:31:31.538 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1273009 00:31:31.538 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1273009 00:31:31.797 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:31.797 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:31.797 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:31.797 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:31:31.797 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:31:31.797 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:31.797 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:31:31.797 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:31.797 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:31.797 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.797 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:31.797 17:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.705 17:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:33.705 00:31:33.705 real 0m11.102s 00:31:33.705 user 0m10.180s 00:31:33.705 sys 0m5.752s 00:31:33.705 17:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:33.705 17:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:33.705 ************************************ 00:31:33.705 END TEST nvmf_abort 00:31:33.705 ************************************ 00:31:33.705 17:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:33.705 17:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:33.705 17:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:33.705 17:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:33.965 ************************************ 00:31:33.966 START TEST nvmf_ns_hotplug_stress 00:31:33.966 ************************************ 00:31:33.966 17:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:33.966 * Looking for test storage... 
00:31:33.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:33.966 17:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:33.966 17:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:31:33.966 17:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:33.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.966 --rc genhtml_branch_coverage=1 00:31:33.966 --rc genhtml_function_coverage=1 00:31:33.966 --rc genhtml_legend=1 00:31:33.966 --rc geninfo_all_blocks=1 00:31:33.966 --rc geninfo_unexecuted_blocks=1 00:31:33.966 00:31:33.966 ' 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:33.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.966 --rc genhtml_branch_coverage=1 00:31:33.966 --rc genhtml_function_coverage=1 00:31:33.966 --rc genhtml_legend=1 00:31:33.966 --rc geninfo_all_blocks=1 00:31:33.966 --rc geninfo_unexecuted_blocks=1 00:31:33.966 00:31:33.966 ' 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:33.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.966 --rc genhtml_branch_coverage=1 00:31:33.966 --rc genhtml_function_coverage=1 00:31:33.966 --rc genhtml_legend=1 00:31:33.966 --rc geninfo_all_blocks=1 00:31:33.966 --rc geninfo_unexecuted_blocks=1 00:31:33.966 00:31:33.966 ' 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:33.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.966 --rc genhtml_branch_coverage=1 00:31:33.966 --rc genhtml_function_coverage=1 
00:31:33.966 --rc genhtml_legend=1 00:31:33.966 --rc geninfo_all_blocks=1 00:31:33.966 --rc geninfo_unexecuted_blocks=1 00:31:33.966 00:31:33.966 ' 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
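The version gymnastics at the top of this test (and of nvmf_abort above) are scripts/common.sh deciding which lcov option spelling to use: lt 1.15 2 splits both strings on ., -, and :, treats missing components as zero, and compares component by component; return 0 means "strictly less", so the --rc lcov_* option set is selected. A simplified sketch of the same walk (it drops the decimal/hex normalization the real helper performs):

  # Minimal version-compare sketch mirroring the cmp_versions trace above.
  lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1                                   # equal is not strictly less
  }
  lt 1.15 2 && echo "lcov accepts the newer --rc lcov_* option names"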
00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:33.966 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:31:33.967 17:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:40.540 17:48:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:40.540 17:48:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:40.540 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:40.540 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:40.540 
17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:40.540 Found net devices under 0000:86:00.0: cvl_0_0 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.540 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:40.541 Found net devices under 0000:86:00.1: cvl_0_1 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:40.541 17:48:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:40.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:40.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:31:40.541 00:31:40.541 --- 10.0.0.2 ping statistics --- 00:31:40.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.541 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:40.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:40.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:31:40.541 00:31:40.541 --- 10.0.0.1 ping statistics --- 00:31:40.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.541 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1277010 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1277010 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1277010 ']' 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
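The nvmftestinit plumbing traced above splits the two E810 ports between network namespaces so one host can act as both NVMe/TCP target and initiator over real hardware. Reduced to its effective commands (interface names and addresses exactly as logged; a sketch of the sequence, not the verbatim script):

    ip netns add cvl_0_0_ns_spdk                  # private namespace for the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                            # reachability check in each direction

Both pings complete with 0% loss, so the harness proceeds to start the target application inside the namespace.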
00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:40.541 17:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:40.541 [2024-10-14 17:48:39.019176] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:40.541 [2024-10-14 17:48:39.020068] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:31:40.541 [2024-10-14 17:48:39.020105] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.541 [2024-10-14 17:48:39.090269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:40.541 [2024-10-14 17:48:39.132727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:40.541 [2024-10-14 17:48:39.132762] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:40.541 [2024-10-14 17:48:39.132770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:40.541 [2024-10-14 17:48:39.132777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:40.541 [2024-10-14 17:48:39.132783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:40.541 [2024-10-14 17:48:39.134101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:40.541 [2024-10-14 17:48:39.134210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.541 [2024-10-14 17:48:39.134211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:40.541 [2024-10-14 17:48:39.201467] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:40.541 [2024-10-14 17:48:39.202551] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:40.541 [2024-10-14 17:48:39.202844] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:40.541 [2024-10-14 17:48:39.202980] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
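These startup notices confirm the configuration this interrupt_mode suite exists to exercise: core mask 0xE yields three reactors (cores 1, 2 and 3), and the app thread plus every nvmf_tgt_poll_group thread is switched to intr mode. The launch that produced them, as traced just above (workspace path shortened):

    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE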
00:31:40.541 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:40.541 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:31:40.541 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:40.541 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:40.541 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:40.541 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:40.541 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:31:40.541 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:40.541 [2024-10-14 17:48:39.435096] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.541 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:40.542 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:40.801 [2024-10-14 17:48:39.831545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.801 17:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:41.060 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:31:41.319 Malloc0 00:31:41.319 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:41.577 Delay0 00:31:41.577 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:41.836 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:31:41.836 NULL1 00:31:41.836 17:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
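At this point the target is fully configured over JSON-RPC. Condensing the commands traced above (rpc.py stands for the logged scripts/rpc.py path; all arguments as logged):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The stress phase that follows runs spdk_nvme_perf (30 s of 512-byte random reads at queue depth 128) against the subsystem while the script keeps unplugging and replugging namespace 1 and stepping NULL1's size up by one per pass (1000, 1001, ...). Reconstructed from the traced lines @44-@50, the loop is roughly this sketch, not the script verbatim:

    while kill -0 "$PERF_PID"; do
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        rpc.py bdev_null_resize NULL1 "$((++null_size))"
    done

The recurring 'Read completed with error (sct=0, sc=11)' bursts below are expected output, consistent with in-flight reads landing on a namespace that has just been detached.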
00:31:42.094 17:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1277283 00:31:42.094 17:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:31:42.094 17:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:31:42.094 17:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:43.467 Read completed with error (sct=0, sc=11) 00:31:43.467 17:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:43.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:43.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:43.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:43.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:43.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:43.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:43.467 17:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:31:43.467 17:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:31:43.726 true 00:31:43.726 17:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:31:43.726 17:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:44.660 17:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:44.660 17:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:31:44.660 17:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:31:44.917 true 00:31:44.917 17:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:31:44.917 17:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:45.175 17:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:45.175 17:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:31:45.175 17:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:31:45.433 true 00:31:45.433 17:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:31:45.433 17:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:46.806 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:46.806 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:46.806 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:46.806 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:46.806 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:31:46.806 17:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:31:47.075 true 00:31:47.075 17:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:31:47.075 17:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:47.332 17:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:47.332 17:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:31:47.332 17:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:31:47.590 true 00:31:47.590 17:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:31:47.590 17:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:48.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:48.965 17:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:48.965 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:31:48.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:48.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:48.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:48.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:48.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:48.965 17:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:31:48.965 17:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:31:49.222 true 00:31:49.222 17:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:31:49.222 17:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:50.156 17:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:50.156 17:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:31:50.156 17:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:31:50.414 true 00:31:50.414 17:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:31:50.414 17:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:50.672 17:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:50.930 17:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:50.930 17:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:50.930 true 00:31:50.930 17:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:31:50.930 17:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:52.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:52.304 17:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:52.304 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:52.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:52.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:52.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:52.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:52.304 17:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:52.304 17:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:52.563 true 00:31:52.563 17:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:31:52.563 17:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:53.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:53.498 17:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:53.498 17:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:53.498 17:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:53.756 true 00:31:53.756 17:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:31:53.756 17:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:54.014 17:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:54.014 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:54.014 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:54.272 true 00:31:54.272 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:31:54.272 17:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:55.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:55.646 17:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:31:55.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:55.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:55.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:55.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:55.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:55.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:55.646 17:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:55.646 17:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:55.904 true 00:31:55.904 17:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:31:55.904 17:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:56.838 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:56.838 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:56.838 17:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:57.098 true 00:31:57.098 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:31:57.098 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:57.356 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:57.615 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:57.615 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:57.615 true 00:31:57.615 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:31:57.615 17:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:58.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:58.990 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:58.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:58.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:58.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:58.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:58.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:58.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:58.990 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:31:58.990 17:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:59.247 true 00:31:59.247 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:31:59.248 17:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:00.182 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:00.182 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:32:00.182 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:32:00.439 true 00:32:00.439 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:32:00.440 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:00.697 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:00.697 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:32:00.698 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:32:00.956 true 00:32:00.956 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:32:00.956 17:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:02.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:02.330 17:49:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:02.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:02.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:02.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:02.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:02.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:02.330 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:32:02.330 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:32:02.588 true 00:32:02.588 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:32:02.588 17:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:03.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:03.521 17:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:03.521 17:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:32:03.521 17:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:32:03.777 true 00:32:03.777 17:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:32:03.777 17:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:04.035 17:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:04.035 17:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:32:04.035 17:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:32:04.293 true 00:32:04.293 17:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:32:04.293 17:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:05.665 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:32:05.665 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:05.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:05.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:05.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:05.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:05.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:05.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:05.665 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:32:05.665 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:32:05.923 true 00:32:05.923 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:32:05.923 17:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:06.856 17:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:06.856 17:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:32:06.856 17:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:32:07.114 true 00:32:07.114 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:32:07.114 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:07.372 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:07.630 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:32:07.630 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:32:07.630 true 00:32:07.630 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:32:07.630 17:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:32:09.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:09.004 17:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:09.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:09.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:09.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:09.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:09.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:09.004 17:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:32:09.004 17:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:32:09.266 true 00:32:09.266 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:32:09.266 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:09.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:10.090 17:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:10.090 17:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:32:10.090 17:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:32:10.348 true 00:32:10.348 17:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:32:10.348 17:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:10.606 17:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:10.871 17:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:32:10.871 17:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:32:10.871 true 00:32:10.871 17:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:32:10.871 17:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:12.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:12.252 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:12.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:12.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:12.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:12.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:12.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:12.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:12.252 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:32:12.252 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:32:12.511 true 00:32:12.511 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:32:12.511 17:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:13.447 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:13.447 Initializing NVMe Controllers 00:32:13.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:13.447 Controller IO queue size 128, less than required. 00:32:13.447 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:13.447 Controller IO queue size 128, less than required. 00:32:13.447 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:13.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:13.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:13.447 Initialization complete. Launching workers. 
00:32:13.447 ======================================================== 00:32:13.447 Latency(us) 00:32:13.447 Device Information : IOPS MiB/s Average min max 00:32:13.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2254.83 1.10 41469.59 1868.63 1020966.79 00:32:13.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18501.77 9.03 6917.94 1569.47 369754.43 00:32:13.447 ======================================================== 00:32:13.447 Total : 20756.60 10.14 10671.36 1569.47 1020966.79 00:32:13.447 00:32:13.447 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:32:13.447 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:32:13.706 true 00:32:13.706 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1277283 00:32:13.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1277283) - No such process 00:32:13.706 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1277283 00:32:13.706 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:13.964 17:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:14.223 17:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:32:14.223 17:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:32:14.223 17:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:32:14.223 17:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:14.223 17:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:32:14.223 null0 00:32:14.223 17:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:14.223 17:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:14.223 17:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:32:14.483 null1 00:32:14.483 17:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:14.483 17:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:14.483 17:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:32:14.742 null2 00:32:14.742 17:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:14.742 17:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:14.742 17:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:32:14.742 null3 00:32:14.742 17:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:14.742 17:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:14.742 17:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:32:15.001 null4 00:32:15.001 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:15.001 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:15.001 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:32:15.260 null5 00:32:15.260 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:15.260 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:15.260 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:32:15.260 null6 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:32:15.524 null7 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
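
The latency summary above is internally consistent, which makes a useful quick sanity check on perf output like this:

    2254.83 + 18501.77                                   = 20756.60   (Total IOPS: plain sum)
    (2254.83 * 41469.59 + 18501.77 * 6917.94) / 20756.60 ≈ 10671.4    (Total average in us: IOPS-weighted mean)
    2254.83 * 512 / 2^20                                 ≈ 1.10       (MiB/s column, consistent with 512-byte I/Os)

The roughly 6x higher average on NSID 1 is also what you would expect if that namespace is the one backed by the delay-injecting Delay0 bdev that the hotplug loop keeps re-attaching.
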
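Before the concurrent phase starts, script lines 58-60 (traced above) size the worker pool and create one small null bdev per worker, null0 through null7. A sketch of that setup phase, with the same rpc_py shorthand as before:

    nthreads=8
    pids=()
    # Create null0..null7: size 100 (MiB) each, 4096-byte logical blocks.
    for ((i = 0; i < nthreads; i++)); do
        rpc_py bdev_null_create "null$i" 100 4096
    done
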
00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
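
Script lines 62-64, traced above, background one add_remove worker per bdev and collect its PID; line 66 (the "wait 1282737 1282740 ..." a little further down) then blocks until all eight finish. From the traced @14-@18 internals, each worker simply re-plugs its own namespace ID ten times. A sketch of both pieces, same shorthand and same caveat that the real script may differ in detail:

    add_remove() {
        local nsid=$1 bdev=$2                                                            # @14
        for ((i = 0; i < 10; i++)); do                                                   # @16
            rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
            rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
        done
    }

    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # @63: NSID i+1 on bdev null$i, in the background
        pids+=($!)                         # @64: remember the worker's PID
    done
    wait "${pids[@]}"                      # @66: the interleaved add/remove trace that follows is these workers
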
00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:32:15.524 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1282737 1282740 1282743 1282746 1282749 1282752 1282754 1282756 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.525 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:15.822 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:15.822 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:15.822 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:15.822 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:15.822 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:15.822 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:15.822 17:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:15.822 17:49:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.103 17:49:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:16.103 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:16.378 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.378 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.378 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:16.378 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.378 17:49:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.378 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:16.378 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.378 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.378 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:16.378 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.378 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.378 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:16.378 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.378 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.378 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:16.378 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.378 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.379 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:16.379 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.379 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.379 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:16.379 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.379 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.379 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:16.641 17:49:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:16.641 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:16.641 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:16.641 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:16.641 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:16.641 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:16.641 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:16.641 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:16.900 17:49:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.900 17:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:16.900 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:17.160 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.161 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.161 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:17.161 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.161 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.161 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:17.161 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.161 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.161 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:17.419 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:17.419 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:17.419 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:17.419 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:17.419 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:17.419 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:17.419 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:17.419 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:17.678 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.678 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.679 17:49:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.679 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:17.939 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:17.939 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:17.939 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:17.939 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:17.939 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:17.939 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:17.939 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:17.939 17:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:17.939 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.939 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.939 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:17.939 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.939 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.939 17:49:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:17.939 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.939 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.939 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:18.198 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.198 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.198 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.198 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.198 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:18.198 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:18.198 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.198 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.198 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:18.198 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.199 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.199 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:18.199 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.199 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.199 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:18.199 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:18.199 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:18.199 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:18.199 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:18.199 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:18.199 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:18.199 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:18.199 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.458 17:49:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.458 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:18.717 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:18.717 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:18.717 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:18.717 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:18.717 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:18.717 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:18.717 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:18.717 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:18.976 17:49:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.976 17:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:18.976 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:18.976 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:19.234 17:49:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.234 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:19.493 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:19.493 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:19.493 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:19.493 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:19.493 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:19.493 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:19.493 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:19.493 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
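For readers following the trace: lines 16-18 of target/ns_hotplug_stress.sh drive everything above — a counted loop that attaches namespaces 1-8 (each backed by one of the null0..null7 bdevs) to nqn.2016-06.io.spdk:cnode1 and then detaches them again, ten times over. The unordered interleaving of the add/remove echoes suggests the RPCs run concurrently. One plausible shape of that loop, reconstructed from the trace rather than copied from the script (the real script body is not shown in this log):

  #!/usr/bin/env bash
  # Sketch of the hotplug stress pattern seen above (reconstructed, not the
  # original script). Assumes a running target with subsystem cnode1 and
  # null bdevs null0..null7 already created, as the log shows.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  i=0
  while (( i < 10 )); do
      for n in {1..8}; do                      # namespace n backed by null(n-1)
          "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
      done
      wait
      for n in {1..8}; do
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" &
      done
      wait
      (( ++i ))
  done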
00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:19.753 rmmod nvme_tcp 00:32:19.753 rmmod nvme_fabrics 00:32:19.753 rmmod nvme_keyring 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1277010 ']' 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1277010 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1277010 ']' 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1277010 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1277010 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1277010' 00:32:19.753 killing process with pid 1277010 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1277010 00:32:19.753 17:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1277010 00:32:20.013 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:20.013 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:20.013 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:20.013 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:32:20.013 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:32:20.013 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:20.013 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:32:20.013 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:20.013 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:20.013 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.013 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:20.013 17:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:22.685 00:32:22.685 real 0m48.237s 00:32:22.685 user 3m0.067s 00:32:22.685 sys 0m20.002s 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:22.685 ************************************ 00:32:22.685 END TEST nvmf_ns_hotplug_stress 00:32:22.685 ************************************ 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:22.685 17:49:21 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:22.685 ************************************ 00:32:22.685 START TEST nvmf_delete_subsystem 00:32:22.685 ************************************ 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:22.685 * Looking for test storage... 00:32:22.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:22.685 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:22.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.686 --rc genhtml_branch_coverage=1 00:32:22.686 --rc genhtml_function_coverage=1 00:32:22.686 --rc genhtml_legend=1 00:32:22.686 --rc geninfo_all_blocks=1 00:32:22.686 --rc geninfo_unexecuted_blocks=1 00:32:22.686 00:32:22.686 ' 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:22.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.686 --rc genhtml_branch_coverage=1 00:32:22.686 --rc genhtml_function_coverage=1 00:32:22.686 --rc genhtml_legend=1 00:32:22.686 --rc geninfo_all_blocks=1 00:32:22.686 --rc geninfo_unexecuted_blocks=1 00:32:22.686 00:32:22.686 ' 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:22.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.686 --rc genhtml_branch_coverage=1 00:32:22.686 --rc genhtml_function_coverage=1 00:32:22.686 --rc genhtml_legend=1 00:32:22.686 --rc geninfo_all_blocks=1 00:32:22.686 --rc geninfo_unexecuted_blocks=1 00:32:22.686 00:32:22.686 ' 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:22.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.686 --rc genhtml_branch_coverage=1 00:32:22.686 --rc genhtml_function_coverage=1 00:32:22.686 --rc 
genhtml_legend=1 00:32:22.686 --rc geninfo_all_blocks=1 00:32:22.686 --rc geninfo_unexecuted_blocks=1 00:32:22.686 00:32:22.686 ' 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.686 17:49:21 
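The lt 1.15 2 / cmp_versions trace a few entries back is the harness deciding whether the installed lcov predates version 2 before picking coverage flags. Condensed, the comparison splits each version string on ".", "-", and ":" and compares numerically field by field. A standalone sketch mirroring the traced walk (ver1_l=2, ver2_l=1, per-field compare); helper name lt as in the log, body reconstructed:

  # Reconstructed sketch of the version compare, not the original helper.
  lt() {   # usage: lt 1.15 2  -> exit 0 iff $1 < $2
      local IFS='.-:' i
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
          (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # strictly older
          (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
      done
      return 1   # equal is not less-than
  }
  lt 1.15 2 && echo "lcov 1.15 < 2: use the legacy coverage flag set"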
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:32:22.686 17:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:27.965 17:49:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:27.965 17:49:26 
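The arrays above are keyed by PCI vendor:device pairs (0x8086 Intel, 0x15b3 Mellanox). To check by hand which of these a host actually carries, lspci's -d filter takes the same pairs; the two E810 ports this box reports show up under 0x8086:0x159b. IDs copied from the trace, device-name glosses are mine:

  lspci -d 8086:159b   # Intel E810 -- matches 0000:86:00.0 and 0000:86:00.1 here
  lspci -d 8086:1592   # E810, other variant
  lspci -d 8086:37d2   # Intel X722
  lspci -d 15b3:101d   # one of the Mellanox ConnectX entries in the list above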
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:27.965 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:27.965 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:27.965 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.965 17:49:26 
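Having matched a PCI function, the harness resolves its kernel net interface by globbing the standard sysfs path — the pci_net_devs assignment above — which is how 0000:86:00.0 becomes cvl_0_0 below. The same lookup as a one-off:

  # Which netdev sits on a given PCI function (standard Linux sysfs layout):
  pci=0000:86:00.0
  for dev in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$dev" ] && echo "$pci -> ${dev##*/}"
  done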
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:27.966 Found net devices under 0000:86:00.0: cvl_0_0 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:27.966 Found net devices under 0000:86:00.1: cvl_0_1 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:27.966 17:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:27.966 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:27.966 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:27.966 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:28.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:28.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:32:28.226 00:32:28.226 --- 10.0.0.2 ping statistics --- 00:32:28.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.226 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:28.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:28.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:32:28.226 00:32:28.226 --- 10.0.0.1 ping statistics --- 00:32:28.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.226 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:28.226 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:28.227 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:28.227 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1287006 00:32:28.227 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:28.227 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1287006 00:32:28.227 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1287006 ']' 00:32:28.227 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.227 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:28.227 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
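nvmfappstart above boils down to launching nvmf_tgt inside the freshly built namespace and blocking until its RPC socket answers. A sketch of that sequence using the exact command line from the log; the polling loop is a stand-in for the waitforlisten helper, whose real body is not shown here:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # stand-in for waitforlisten: poll the default RPC socket until it responds
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt up, pid $nvmfpid"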
00:32:28.227 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:28.227 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:28.227 [2024-10-14 17:49:27.331524] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:28.227 [2024-10-14 17:49:27.332458] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:32:28.227 [2024-10-14 17:49:27.332492] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:28.487 [2024-10-14 17:49:27.403741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:28.487 [2024-10-14 17:49:27.445783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:28.487 [2024-10-14 17:49:27.445816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:28.487 [2024-10-14 17:49:27.445825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:28.487 [2024-10-14 17:49:27.445831] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:28.487 [2024-10-14 17:49:27.445836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:28.487 [2024-10-14 17:49:27.447041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.487 [2024-10-14 17:49:27.447041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.487 [2024-10-14 17:49:27.514557] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:28.487 [2024-10-14 17:49:27.515269] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:28.487 [2024-10-14 17:49:27.515432] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
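The two "Reactor started on core 0/1" notices follow directly from the -m 0x3 mask passed above: bit k set means core k hosts a reactor, and under --interrupt-mode each reactor sleeps on its fds instead of busy-polling. Decoding a mask by hand:

  # Expand an SPDK core mask into core numbers (0x3 -> cores 0 1).
  mask=0x3
  printf 'cores:'
  for ((c = 0; c < 64; c++)); do
      (( (mask >> c) & 1 )) && printf ' %d' "$c"
  done
  echo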
00:32:28.487 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:28.487 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:32:28.487 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:28.487 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:28.487 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:28.487 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:28.487 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:28.487 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.487 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:28.487 [2024-10-14 17:49:27.595744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.487 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.487 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:28.487 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.487 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:28.488 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.488 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:28.488 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.488 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:28.488 [2024-10-14 17:49:27.624083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:28.748 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.748 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:32:28.748 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.748 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:28.748 NULL1 00:32:28.748 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.748 17:49:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:28.748 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.748 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:28.748 Delay0 00:32:28.748 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.748 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:28.748 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.748 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:28.748 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.748 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1287165 00:32:28.748 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:32:28.748 17:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:28.748 [2024-10-14 17:49:27.729902] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
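Collapsed from the xtrace above, the stack the test has now built: a TCP transport, a subsystem capped at 10 namespaces with a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev that adds ~1 s of latency to every op, so plenty of I/O is still queued when the subsystem is deleted mid-run. Equivalent shell, using rpc.py as a stand-in for the suite's rpc_cmd wrapper (an assumption; rpc_cmd also routes through the namespace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512          # 1000 MiB backing, 512 B blocks
  rpc.py bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000 # avg/p99 read+write latency, in us
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # perf is then started in the background against that listener:
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &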
00:32:30.654 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:30.654 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:30.654 17:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:30.914-00:32:31.858 [several hundred interleaved 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' records, punctuated by 'starting I/O failed: -6', elided: queued perf I/O aborts as the subsystem is deleted underneath it; the distinct errors from that window are kept below]
00:32:30.915 [2024-10-14 17:49:29.809274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5608000c10 is same with the state(6) to be set
00:32:30.915 [2024-10-14 17:49:29.809852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965750 is same with the state(6) to be set
00:32:30.916 [2024-10-14 17:49:29.810040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f560800cff0 is same with the state(6) to be set
00:32:31.857 [2024-10-14 17:49:30.785709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x966a70 is same with the state(6) to be set
00:32:31.857 [2024-10-14 17:49:30.810234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f560800d320 is same with the state(6) to be set
00:32:31.857 [2024-10-14 17:49:30.811442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965390 is same with the state(6) to be set
00:32:31.857 [2024-10-14 17:49:30.811619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965930 is same with the state(6) to be set
00:32:31.858 [2024-10-14 17:49:30.812348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965570 is same with the state(6) to be set
00:32:31.858 Initializing NVMe Controllers
00:32:31.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:31.858 Controller IO queue size 128, less than required.
00:32:31.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:31.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:32:31.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:32:31.858 Initialization complete. Launching workers.
00:32:31.858 ======================================================== 00:32:31.858 Latency(us) 00:32:31.858 Device Information : IOPS MiB/s Average min max 00:32:31.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.63 0.10 943148.23 1317.86 1010091.49 00:32:31.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 151.94 0.07 890673.79 394.84 1010553.36 00:32:31.858 ======================================================== 00:32:31.858 Total : 347.57 0.17 920209.40 394.84 1010553.36 00:32:31.858 00:32:31.858 [2024-10-14 17:49:30.813037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x966a70 (9): Bad file descriptor 00:32:31.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:32:31.858 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.858 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:32:31.858 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1287165 00:32:31.858 17:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1287165 00:32:32.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1287165) - No such process 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1287165 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1287165 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1287165 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:32.426 [2024-10-14 17:49:31.343975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1287716 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1287716 00:32:32.426 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:32.426 [2024-10-14 17:49:31.416377] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
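The repeating kill -0 / sleep 0.5 records that follow are the script's bounded wait on the perf process (pid 1287716): poll with kill -0, sleep half a second per iteration, and bail out if perf outlives its ~10 s budget of 20 iterations. A sketch reconstructed from the @56-@60 line references in the trace (variable names are the script's own; the exact control flow is inferred):

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # line 57: perf still running?
    (( delay++ > 20 )) && exit 1              # line 60: ~10 s (20 * 0.5 s) exhausted
    sleep 0.5                                 # line 58
  done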
00:32:33.006 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:33.006 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1287716 00:32:33.006 17:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:33.266 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:33.266 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1287716 00:32:33.266 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:33.835 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:33.835 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1287716 00:32:33.835 17:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:34.403 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:34.403 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1287716 00:32:34.403 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:34.972 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:34.972 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1287716 00:32:34.972 17:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:35.545 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:35.545 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1287716 00:32:35.545 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:35.545 Initializing NVMe Controllers 00:32:35.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:35.545 Controller IO queue size 128, less than required. 00:32:35.545 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:35.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:35.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:35.545 Initialization complete. Launching workers. 
00:32:35.545 ======================================================== 00:32:35.545 Latency(us) 00:32:35.545 Device Information : IOPS MiB/s Average min max 00:32:35.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002319.96 1000155.30 1006417.37 00:32:35.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004190.58 1000306.16 1010649.89 00:32:35.545 ======================================================== 00:32:35.545 Total : 256.00 0.12 1003255.27 1000155.30 1010649.89 00:32:35.545 00:32:35.804 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:35.804 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1287716 00:32:35.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1287716) - No such process 00:32:35.805 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1287716 00:32:35.805 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:32:35.805 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:32:35.805 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:35.805 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:32:35.805 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:35.805 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:32:35.805 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:35.805 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:35.805 rmmod nvme_tcp 00:32:35.805 rmmod nvme_fabrics 00:32:35.805 rmmod nvme_keyring 00:32:35.805 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:36.064 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:32:36.064 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:32:36.064 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1287006 ']' 00:32:36.064 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1287006 00:32:36.064 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1287006 ']' 00:32:36.064 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1287006 00:32:36.064 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:32:36.064 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:36.064 17:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1287006 00:32:36.064 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:36.064 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:36.064 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1287006' 00:32:36.064 killing process with pid 1287006 00:32:36.064 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1287006 00:32:36.064 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1287006 00:32:36.064 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:36.064 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:36.064 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:36.064 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:32:36.064 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:32:36.064 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:36.064 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:32:36.064 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:36.064 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:36.064 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.064 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:36.064 17:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:38.604 00:32:38.604 real 0m16.064s 00:32:38.604 user 0m25.783s 00:32:38.604 sys 0m6.131s 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:38.604 ************************************ 00:32:38.604 END TEST nvmf_delete_subsystem 00:32:38.604 ************************************ 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:38.604 ************************************ 00:32:38.604 START TEST nvmf_host_management 00:32:38.604 ************************************ 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:38.604 * Looking for test storage... 00:32:38.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:38.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.604 --rc genhtml_branch_coverage=1 00:32:38.604 --rc genhtml_function_coverage=1 00:32:38.604 --rc genhtml_legend=1 00:32:38.604 --rc geninfo_all_blocks=1 00:32:38.604 --rc geninfo_unexecuted_blocks=1 00:32:38.604 00:32:38.604 ' 00:32:38.604 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:38.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.604 --rc genhtml_branch_coverage=1 00:32:38.604 --rc genhtml_function_coverage=1 00:32:38.605 --rc genhtml_legend=1 00:32:38.605 --rc geninfo_all_blocks=1 00:32:38.605 --rc geninfo_unexecuted_blocks=1 00:32:38.605 00:32:38.605 ' 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:38.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.605 --rc genhtml_branch_coverage=1 00:32:38.605 --rc genhtml_function_coverage=1 00:32:38.605 --rc genhtml_legend=1 00:32:38.605 --rc geninfo_all_blocks=1 00:32:38.605 --rc geninfo_unexecuted_blocks=1 00:32:38.605 00:32:38.605 ' 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:38.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.605 --rc genhtml_branch_coverage=1 00:32:38.605 --rc genhtml_function_coverage=1 00:32:38.605 --rc genhtml_legend=1 
00:32:38.605 --rc geninfo_all_blocks=1 00:32:38.605 --rc geninfo_unexecuted_blocks=1 00:32:38.605 00:32:38.605 ' 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:38.605 17:49:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:32:38.605 17:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:45.179 17:49:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:45.179 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:45.179 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
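For reference, the vendor/device matching traced above can be reproduced straight from sysfs. A minimal sketch, not SPDK's actual gather_supported_nvmf_pci_devs (which walks a prebuilt pci_bus_cache); the device IDs are the ones listed in the trace:

#!/usr/bin/env bash
# Bucket NICs by PCI vendor:device ID, mirroring the e810/x722/mlx arrays above.
for dev in /sys/bus/pci/devices/*; do
  vendor=$(<"$dev/vendor")   # e.g. 0x8086 (Intel)
  device=$(<"$dev/device")   # e.g. 0x159b (E810)
  case "$vendor:$device" in
    0x8086:0x1592|0x8086:0x159b) echo "e810: ${dev##*/} ($vendor - $device)" ;;
    0x8086:0x37d2)               echo "x722: ${dev##*/} ($vendor - $device)" ;;
    0x15b3:*)                    echo "mlx:  ${dev##*/} ($vendor - $device)" ;;
  esac
done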
00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:45.179 Found net devices under 0000:86:00.0: cvl_0_0 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:45.179 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:45.180 Found net devices under 0000:86:00.1: cvl_0_1 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:45.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:45.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:32:45.180 00:32:45.180 --- 10.0.0.2 ping statistics --- 00:32:45.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.180 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:45.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:45.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:32:45.180 00:32:45.180 --- 10.0.0.1 ping statistics --- 00:32:45.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.180 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1291734 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1291734 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1291734 ']' 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:45.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:45.180 [2024-10-14 17:49:43.485364] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:45.180 [2024-10-14 17:49:43.486380] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:32:45.180 [2024-10-14 17:49:43.486419] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:45.180 [2024-10-14 17:49:43.558163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:45.180 [2024-10-14 17:49:43.600697] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:45.180 [2024-10-14 17:49:43.600736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:45.180 [2024-10-14 17:49:43.600743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:45.180 [2024-10-14 17:49:43.600749] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:45.180 [2024-10-14 17:49:43.600756] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:45.180 [2024-10-14 17:49:43.602246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:45.180 [2024-10-14 17:49:43.602354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:45.180 [2024-10-14 17:49:43.602460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.180 [2024-10-14 17:49:43.602461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:45.180 [2024-10-14 17:49:43.670630] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:45.180 [2024-10-14 17:49:43.671310] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:45.180 [2024-10-14 17:49:43.671811] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:45.180 [2024-10-14 17:49:43.672371] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:45.180 [2024-10-14 17:49:43.672399] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
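The namespace plumbing and target launch traced above reduce to a handful of commands. A condensed sketch assembled from the trace (interface names, addresses, and flags as logged; the socket-poll loop at the end is a crude stand-in for the waitforlisten helper):

# Put the target-side port in its own namespace; address and raise both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Interrupt-mode target on cores 1-4 (-m 0x1E); RPC stays on the default socket,
# which is reachable from the root namespace since it is a UNIX socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done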
00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:45.180 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:45.181 [2024-10-14 17:49:43.747158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:45.181 Malloc0 00:32:45.181 [2024-10-14 17:49:43.831411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1291971 00:32:45.181 17:49:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1291971 /var/tmp/bdevperf.sock 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1291971 ']' 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:45.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:45.181 { 00:32:45.181 "params": { 00:32:45.181 "name": "Nvme$subsystem", 00:32:45.181 "trtype": "$TEST_TRANSPORT", 00:32:45.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:45.181 "adrfam": "ipv4", 00:32:45.181 "trsvcid": "$NVMF_PORT", 00:32:45.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:45.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:45.181 "hdgst": ${hdgst:-false}, 00:32:45.181 "ddgst": ${ddgst:-false} 00:32:45.181 }, 00:32:45.181 "method": "bdev_nvme_attach_controller" 00:32:45.181 } 00:32:45.181 EOF 00:32:45.181 )") 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
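The heredoc above renders one bdev_nvme_attach_controller stanza per subsystem; bdevperf consumes it through process substitution, which is where the /dev/fd/63 in the traced command line comes from. A sketch of that launch plus the read-count poll traced below (gen_nvmf_target_json is the helper invoked above; paths as used in this workspace):

# Feed the generated attach-controller JSON to bdevperf on /dev/fd/63.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
  --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10 &
# waitforio: poll until the bdev has completed at least 100 reads.
for _ in $(seq 10); do
  reads=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
          jq -r '.bdevs[0].num_read_ops')
  [ "$reads" -ge 100 ] && break
  sleep 0.25
done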
00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:32:45.181 17:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:45.181 "params": { 00:32:45.181 "name": "Nvme0", 00:32:45.181 "trtype": "tcp", 00:32:45.181 "traddr": "10.0.0.2", 00:32:45.181 "adrfam": "ipv4", 00:32:45.181 "trsvcid": "4420", 00:32:45.181 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:45.181 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:45.181 "hdgst": false, 00:32:45.181 "ddgst": false 00:32:45.181 }, 00:32:45.181 "method": "bdev_nvme_attach_controller" 00:32:45.181 }' 00:32:45.181 [2024-10-14 17:49:43.928547] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:32:45.181 [2024-10-14 17:49:43.928597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291971 ] 00:32:45.181 [2024-10-14 17:49:43.996762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.181 [2024-10-14 17:49:44.037590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.181 Running I/O for 10 seconds... 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=95 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 95 -ge 100 ']' 00:32:45.441 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:32:45.704 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:32:45.704 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:45.704 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:45.704 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:45.704 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.704 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:45.704 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.704 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:32:45.704 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:32:45.704 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:32:45.704 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:32:45.704 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:32:45.704 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:45.704 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.704 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:45.704 [2024-10-14 17:49:44.715089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.704 [2024-10-14 17:49:44.715130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.704 [2024-10-14 17:49:44.715140] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.704 [2024-10-14 17:49:44.715147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.704 [2024-10-14 17:49:44.715155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.704 [2024-10-14 17:49:44.715161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.704 [2024-10-14 17:49:44.715168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.704 [2024-10-14 17:49:44.715175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.704 [2024-10-14 17:49:44.715182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bb5c0 is same with the state(6) to be set 00:32:45.704 [2024-10-14 17:49:44.717211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x762f80 is same with the state(6) to be set [message repeated for each subsequent recv-state check through 17:49:44.717615]
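The flood of SQ-deletion aborts that follows is the expected fallout of this test step: the host's access is revoked while bdevperf's verify workload is still running, so the target deletes the queue pair and every in-flight READ completes with ABORTED - SQ DELETION. In rpc.py form, the nvmf_subsystem_remove_host call traced earlier is simply:

# Revoke host0's access to cnode0 mid-I/O; in-flight commands abort as logged below.
./scripts/rpc.py nvmf_subsystem_remove_host \
  nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0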
00:32:45.705 [2024-10-14 17:49:44.717885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.705 [2024-10-14 17:49:44.717909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [equivalent READ command/completion pairs follow for cid:1 through cid:57, lba:98432 through lba:105600 in 128-block steps, each completion ABORTED - SQ DELETION (00/08)] 00:32:45.706 [2024-10-14 17:49:44.718759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.706 [2024-10-14 17:49:44.718766] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.706 [2024-10-14 17:49:44.718773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.706 [2024-10-14 17:49:44.718780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.706 [2024-10-14 17:49:44.718789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.706 [2024-10-14 17:49:44.718795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.706 [2024-10-14 17:49:44.718803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.706 [2024-10-14 17:49:44.718809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.706 [2024-10-14 17:49:44.718817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.706 [2024-10-14 17:49:44.718824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.706 [2024-10-14 17:49:44.718831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.706 [2024-10-14 17:49:44.718837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.706 [2024-10-14 17:49:44.718845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad4850 is same with the state(6) to be set 00:32:45.706 [2024-10-14 17:49:44.718894] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xad4850 was disconnected and freed. reset controller. 
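Editor's note: the burst of ABORTED - SQ DELETION completions above is the expected drain of in-flight I/O when qpair 1's submission queue is torn down for the controller reset; each aborted READ is len:128 blocks, which lines up with the 64 KiB bdevperf I/O size over a 512-byte-block namespace (128 * 512 B = 65536 B). A quick triage sketch for summarizing such a burst from a saved console log (the file name build.log is an assumption; the message format is the one shown above):

# Summarize the aborted READs recorded in a saved SPDK console log (assumed: build.log).
grep -o 'READ sqid:1 cid:[0-9]* nsid:1 lba:[0-9]* len:[0-9]*' build.log |
awk -F'[: ]' '{
    lba = $9 + 0; len = $11 + 0; n++          # $9 = lba value, $11 = len value
    if (n == 1 || lba < min) min = lba
    if (lba > max) max = lba
} END {
    printf "aborted READs: %d, lba %d..%d, len %d blocks each\n", n, min, max, len
}'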
00:32:45.706 [2024-10-14 17:49:44.719811] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:32:45.706 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:45.706 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:32:45.706 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:45.706 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:45.706 task offset: 98304 on job bdev=Nvme0n1 fails
00:32:45.706
00:32:45.706 Latency(us)
00:32:45.706 [2024-10-14T15:49:44.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:45.706 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:45.706 Job: Nvme0n1 ended in about 0.41 seconds with error
00:32:45.706 Verification LBA range: start 0x0 length 0x400
00:32:45.706 Nvme0n1 : 0.41 1894.14 118.38 157.85 0.00 30366.35 3620.08 26588.89
00:32:45.706 [2024-10-14T15:49:44.844Z] ===================================================================================================================
00:32:45.706 [2024-10-14T15:49:44.844Z] Total : 1894.14 118.38 157.85 0.00 30366.35 3620.08 26588.89
00:32:45.706 [2024-10-14 17:49:44.722154] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:32:45.706 [2024-10-14 17:49:44.722175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bb5c0 (9): Bad file descriptor
00:32:45.706 [2024-10-14 17:49:44.723146] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:32:45.706 [2024-10-14 17:49:44.723214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:32:45.706 [2024-10-14 17:49:44.723236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:45.706 [2024-10-14 17:49:44.723247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:32:45.706 [2024-10-14 17:49:44.723255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:32:45.706 [2024-10-14 17:49:44.723262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:45.706 [2024-10-14 17:49:44.723268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8bb5c0
00:32:45.706 [2024-10-14 17:49:44.723289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bb5c0 (9): Bad file descriptor
00:32:45.706 [2024-10-14 17:49:44.723300] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:32:45.706 [2024-10-14 17:49:44.723307] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:32:45.706 [2024-10-14 17:49:44.723315] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
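Editor's note: sct 1, sc 132 in the completion above is the fabrics CONNECT "invalid host" status (the 01/84 printed on the COMMAND SPECIFIC line), matching the target-side "does not allow host" error; the reconnect keeps failing until the rpc_cmd nvmf_subsystem_add_host call traced above re-authorizes the host NQN. Done by hand against a running target it would look roughly like this (NQNs are the ones from this run; the rpc.py path and default RPC socket are assumptions):

# Re-authorize the host NQN on the subsystem so the next fabrics CONNECT succeeds.
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0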
00:32:45.706 [2024-10-14 17:49:44.723327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:45.706 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:45.706 17:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:32:46.646 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1291971
00:32:46.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1291971) - No such process
00:32:46.646 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:32:46.646 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:32:46.646 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:32:46.646 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:32:46.646 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=()
00:32:46.646 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config
00:32:46.646 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:32:46.646 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:32:46.646 {
00:32:46.646 "params": {
00:32:46.646 "name": "Nvme$subsystem",
00:32:46.646 "trtype": "$TEST_TRANSPORT",
00:32:46.646 "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:46.646 "adrfam": "ipv4",
00:32:46.646 "trsvcid": "$NVMF_PORT",
00:32:46.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:46.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:46.646 "hdgst": ${hdgst:-false},
00:32:46.646 "ddgst": ${ddgst:-false}
00:32:46.646 },
00:32:46.646 "method": "bdev_nvme_attach_controller"
00:32:46.646 }
00:32:46.646 EOF
00:32:46.646 )")
00:32:46.646 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat
00:32:46.646 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq .
00:32:46.646 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=,
00:32:46.646 17:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:32:46.646 "params": {
00:32:46.646 "name": "Nvme0",
00:32:46.646 "trtype": "tcp",
00:32:46.646 "traddr": "10.0.0.2",
00:32:46.646 "adrfam": "ipv4",
00:32:46.646 "trsvcid": "4420",
00:32:46.646 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:32:46.646 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:32:46.646 "hdgst": false,
00:32:46.646 "ddgst": false
00:32:46.646 },
00:32:46.646 "method": "bdev_nvme_attach_controller"
00:32:46.646 }'
00:32:46.646 [2024-10-14 17:49:45.786420] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization...
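Editor's note: gen_nvmf_target_json above only emits the bdev_nvme_attach_controller stanza that the trace prints, and bdevperf consumes it via /dev/fd/62 wrapped in the standard SPDK JSON-config shape. A standalone reproduction might look like the sketch below (the attach parameters are verbatim from the printed config; the subsystems wrapper, the temp-file path, and the relative bdevperf path are assumptions):

# Minimal sketch: same 64-deep, 64 KiB verify workload as the bdevperf run above,
# against the NVMe-oF/TCP subsystem this test exposes.
cat > /tmp/bdevperf.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1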
00:32:46.905 [2024-10-14 17:49:45.786468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292217 ]
00:32:46.905 [2024-10-14 17:49:45.852447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:46.905 [2024-10-14 17:49:45.890726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:47.164 Running I/O for 1 seconds...
00:32:48.103 1984.00 IOPS, 124.00 MiB/s
00:32:48.103
00:32:48.103 Latency(us)
00:32:48.103 [2024-10-14T15:49:47.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:48.103 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:48.103 Verification LBA range: start 0x0 length 0x400
00:32:48.103 Nvme0n1 : 1.00 2040.18 127.51 0.00 0.00 30880.36 6397.56 26339.23
00:32:48.103 [2024-10-14T15:49:47.241Z] ===================================================================================================================
00:32:48.103 [2024-10-14T15:49:47.241Z] Total : 2040.18 127.51 0.00 0.00 30880.36 6397.56 26339.23
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:48.365 rmmod nvme_tcp
00:32:48.365 rmmod nvme_fabrics
00:32:48.365 rmmod nvme_keyring
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1291734 ']'
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1291734
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1291734 ']'
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1291734
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1291734
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1291734'
00:32:48.365 killing process with pid 1291734
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1291734
00:32:48.365 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1291734
00:32:48.624 [2024-10-14 17:49:47.613202] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:32:48.624 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:32:48.624 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:32:48.624 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:32:48.624 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:32:48.624 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save
00:32:48.624 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:32:48.624 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore
00:32:48.624 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:48.624 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:48.624 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:48.624 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:48.624 17:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:32:51.162
00:32:51.162 real 0m12.396s
00:32:51.162 user 0m18.319s
00:32:51.162 sys 0m6.210s
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:51.162 ************************************
00:32:51.162 END TEST nvmf_host_management
00:32:51.162 ************************************
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:51.162 ************************************
00:32:51.162 START TEST nvmf_lvol
00:32:51.162 ************************************
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:32:51.162 * Looking for test storage...
00:32:51.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:51.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.162 --rc genhtml_branch_coverage=1 00:32:51.162 --rc genhtml_function_coverage=1 00:32:51.162 --rc genhtml_legend=1 00:32:51.162 --rc geninfo_all_blocks=1 00:32:51.162 --rc geninfo_unexecuted_blocks=1 00:32:51.162 00:32:51.162 ' 00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:51.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.162 --rc genhtml_branch_coverage=1 00:32:51.162 --rc genhtml_function_coverage=1 00:32:51.162 --rc genhtml_legend=1 00:32:51.162 --rc geninfo_all_blocks=1 00:32:51.162 --rc geninfo_unexecuted_blocks=1 00:32:51.162 00:32:51.162 ' 00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:51.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.162 --rc genhtml_branch_coverage=1 00:32:51.162 --rc genhtml_function_coverage=1 00:32:51.162 --rc genhtml_legend=1 00:32:51.162 --rc geninfo_all_blocks=1 00:32:51.162 --rc geninfo_unexecuted_blocks=1 00:32:51.162 00:32:51.162 ' 00:32:51.162 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:51.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.162 --rc genhtml_branch_coverage=1 00:32:51.162 --rc genhtml_function_coverage=1 
00:32:51.162 --rc genhtml_legend=1 00:32:51.162 --rc geninfo_all_blocks=1 00:32:51.162 --rc geninfo_unexecuted_blocks=1 00:32:51.162 00:32:51.162 ' 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:51.163 17:49:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.163 17:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.163 17:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:51.163 17:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:51.163 17:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:32:51.163 17:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:56.441 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:56.441 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:32:56.441 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:56.441 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:56.441 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:56.441 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:32:56.441 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:56.441 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:56.701 17:49:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:56.701 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:56.701 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:56.701 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:56.702 Found net devices under 0000:86:00.0: cvl_0_0 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:56.702 Found net devices under 0000:86:00.1: cvl_0_1 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:56.702 
17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:56.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:56.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:32:56.702 00:32:56.702 --- 10.0.0.2 ping statistics --- 00:32:56.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:56.702 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:32:56.702 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:56.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:56.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:32:56.962 00:32:56.962 --- 10.0.0.1 ping statistics --- 00:32:56.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:56.962 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1295981 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1295981 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1295981 ']' 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:56.962 17:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:56.962 [2024-10-14 17:49:55.940248] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
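Editor's note: unlike the polled-mode host_management run above, this target is launched with --interrupt-mode, which is why the reactors and nvmf_tgt poll-group threads that come up next report intr mode instead of busy polling. The launch line buried in the trace, pulled out for readability (namespace name and flags are verbatim from this run; run as root, and the relative nvmf_tgt path is an assumption):

# Start the nvmf target inside the test namespace in interrupt mode:
# -i 0 = shm id, -e 0xFFFF = all tracepoint groups, -m 0x7 = cores 0-2.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7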
00:32:56.962 [2024-10-14 17:49:55.941229] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:32:56.962 [2024-10-14 17:49:55.941269] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:56.962 [2024-10-14 17:49:56.012793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:56.962 [2024-10-14 17:49:56.055105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:56.962 [2024-10-14 17:49:56.055140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:56.962 [2024-10-14 17:49:56.055148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:56.962 [2024-10-14 17:49:56.055154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:56.962 [2024-10-14 17:49:56.055159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:56.962 [2024-10-14 17:49:56.056403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:56.962 [2024-10-14 17:49:56.056436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.962 [2024-10-14 17:49:56.056436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:57.221 [2024-10-14 17:49:56.123695] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:57.221 [2024-10-14 17:49:56.124687] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:57.221 [2024-10-14 17:49:56.124993] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:57.221 [2024-10-14 17:49:56.125140] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
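The startup notices above are the point of this nvmf_target_core_interrupt_mode suite: launched with --interrupt-mode and core mask 0x7, the target brings up three reactors and switches the app thread and every nvmf poll-group thread to interrupt-driven operation instead of busy polling. A sketch of the launch-and-wait step, with the command line taken from the trace and a simple RPC poll standing in for the harness's waitforlisten helper:

    # launch the target inside the target namespace (paths shortened here)
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
    nvmfpid=$!
    # block until the app answers on its default RPC socket, /var/tmp/spdk.sock
    until ./scripts/rpc.py rpc_get_methods &> /dev/null; do
        sleep 0.1
    done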
00:32:57.221 17:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:57.221 17:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:32:57.221 17:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:57.221 17:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:57.221 17:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:57.221 17:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:57.221 17:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:57.221 [2024-10-14 17:49:56.361270] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.481 17:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:57.481 17:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:57.481 17:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:57.740 17:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:57.740 17:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:57.999 17:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:58.258 17:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=38e3e7a4-8672-4fc2-be25-c34c8f6a6af4 00:32:58.258 17:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 38e3e7a4-8672-4fc2-be25-c34c8f6a6af4 lvol 20 00:32:58.517 17:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1af36f38-a99c-46e1-a25f-a6bff28d02d5 00:32:58.517 17:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:58.517 17:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1af36f38-a99c-46e1-a25f-a6bff28d02d5 00:32:58.776 17:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:59.035 [2024-10-14 17:49:57.929170] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:32:59.035 17:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:59.035 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1296350 00:32:59.035 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:59.035 17:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:33:00.412 17:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1af36f38-a99c-46e1-a25f-a6bff28d02d5 MY_SNAPSHOT 00:33:00.412 17:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=440167c0-d997-40be-bca4-d4043586084f 00:33:00.412 17:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1af36f38-a99c-46e1-a25f-a6bff28d02d5 30 00:33:00.671 17:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 440167c0-d997-40be-bca4-d4043586084f MY_CLONE 00:33:00.929 17:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2b0f8376-d766-48e1-9334-2b113f559d62 00:33:00.929 17:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 2b0f8376-d766-48e1-9334-2b113f559d62 00:33:01.498 17:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1296350 00:33:09.626 Initializing NVMe Controllers 00:33:09.626 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:09.626 Controller IO queue size 128, less than required. 00:33:09.626 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:09.626 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:33:09.626 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:33:09.626 Initialization complete. Launching workers. 
00:33:09.626 ======================================================== 00:33:09.626 Latency(us) 00:33:09.626 Device Information : IOPS MiB/s Average min max 00:33:09.626 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12329.60 48.16 10381.88 1721.04 72911.30 00:33:09.626 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12011.00 46.92 10659.32 1840.64 63702.32 00:33:09.626 ======================================================== 00:33:09.626 Total : 24340.60 95.08 10518.78 1721.04 72911.30 00:33:09.626 00:33:09.626 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:09.626 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1af36f38-a99c-46e1-a25f-a6bff28d02d5 00:33:09.906 17:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 38e3e7a4-8672-4fc2-be25-c34c8f6a6af4 00:33:10.207 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:33:10.207 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:33:10.207 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:33:10.207 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:10.207 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:33:10.207 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:10.207 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:33:10.207 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:10.207 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:10.207 rmmod nvme_tcp 00:33:10.208 rmmod nvme_fabrics 00:33:10.208 rmmod nvme_keyring 00:33:10.208 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:10.208 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:33:10.208 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:33:10.208 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1295981 ']' 00:33:10.208 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1295981 00:33:10.208 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1295981 ']' 00:33:10.208 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1295981 00:33:10.208 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:33:10.208 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:10.208 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1295981 00:33:10.208 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:10.208 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:10.208 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1295981' 00:33:10.208 killing process with pid 1295981 00:33:10.208 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1295981 00:33:10.208 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1295981 00:33:10.467 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:10.467 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:10.467 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:10.467 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:33:10.467 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:33:10.467 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:10.467 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:33:10.467 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:10.467 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:10.467 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.467 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:10.467 17:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.373 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:12.373 00:33:12.373 real 0m21.672s 00:33:12.373 user 0m55.416s 00:33:12.373 sys 0m9.690s 00:33:12.373 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:12.373 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:12.373 ************************************ 00:33:12.373 END TEST nvmf_lvol 00:33:12.373 ************************************ 00:33:12.373 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:12.373 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:12.373 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:12.373 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:12.633 ************************************ 00:33:12.633 START TEST nvmf_lvs_grow 00:33:12.633 
************************************ 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:12.633 * Looking for test storage... 00:33:12.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:12.633 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:12.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.633 --rc genhtml_branch_coverage=1 00:33:12.633 --rc genhtml_function_coverage=1 00:33:12.633 --rc genhtml_legend=1 00:33:12.634 --rc geninfo_all_blocks=1 00:33:12.634 --rc geninfo_unexecuted_blocks=1 00:33:12.634 00:33:12.634 ' 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:12.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.634 --rc genhtml_branch_coverage=1 00:33:12.634 --rc genhtml_function_coverage=1 00:33:12.634 --rc genhtml_legend=1 00:33:12.634 --rc geninfo_all_blocks=1 00:33:12.634 --rc geninfo_unexecuted_blocks=1 00:33:12.634 00:33:12.634 ' 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:12.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.634 --rc genhtml_branch_coverage=1 00:33:12.634 --rc genhtml_function_coverage=1 00:33:12.634 --rc genhtml_legend=1 00:33:12.634 --rc geninfo_all_blocks=1 00:33:12.634 --rc geninfo_unexecuted_blocks=1 00:33:12.634 00:33:12.634 ' 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:12.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.634 --rc genhtml_branch_coverage=1 00:33:12.634 --rc genhtml_function_coverage=1 00:33:12.634 --rc genhtml_legend=1 00:33:12.634 --rc geninfo_all_blocks=1 00:33:12.634 --rc geninfo_unexecuted_blocks=1 00:33:12.634 00:33:12.634 ' 00:33:12.634 17:50:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
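A side note on the PATH walls above: paths/export.sh prepends the golangci, protoc, and Go directories each time it is sourced, and it is apparently re-sourced once per nested test script, so the exported PATH accumulates seven-odd copies of each entry. That is harmless for command lookup (first match wins) but noisy in the log. A hypothetical dedup helper, not part of the SPDK scripts, that collapses repeats while keeping first-seen order:

    # squeeze duplicate PATH entries, preserving order of first appearance
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:*$//')
    export PATH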
00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:33:12.634 17:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:19.209 17:50:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:19.209 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:19.209 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:19.209 Found net devices under 0000:86:00.0: cvl_0_0 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:19.209 Found net devices under 0000:86:00.1: cvl_0_1 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:19.209 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:19.209 17:50:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:19.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:19.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:33:19.210 00:33:19.210 --- 10.0.0.2 ping statistics --- 00:33:19.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.210 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:19.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:19.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:33:19.210 00:33:19.210 --- 10.0.0.1 ping statistics --- 00:33:19.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:19.210 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1301604 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1301604 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1301604 ']' 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:19.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:19.210 [2024-10-14 17:50:17.718001] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
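The nvmf_lvs_grow run repeats the same bring-up, but with core mask 0x1: a single reactor comes up and a single poll-group thread is flipped to interrupt mode. Condensed from the trace, with the transport options matching the nvmf_create_transport call that follows just below:

    # second target instance: identical launch, single-core mask
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    # once it is up, create the TCP transport with the harness's options
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192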
00:33:19.210 [2024-10-14 17:50:17.718907] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:33:19.210 [2024-10-14 17:50:17.718940] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:19.210 [2024-10-14 17:50:17.791528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.210 [2024-10-14 17:50:17.833279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:19.210 [2024-10-14 17:50:17.833317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:19.210 [2024-10-14 17:50:17.833325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:19.210 [2024-10-14 17:50:17.833331] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:19.210 [2024-10-14 17:50:17.833336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:19.210 [2024-10-14 17:50:17.833842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.210 [2024-10-14 17:50:17.900729] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:19.210 [2024-10-14 17:50:17.900945] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:19.210 17:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:19.210 [2024-10-14 17:50:18.134511] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:19.210 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:33:19.210 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:19.210 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:19.210 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:19.210 ************************************ 00:33:19.210 START TEST lvs_grow_clean 00:33:19.210 ************************************ 00:33:19.210 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:33:19.210 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:19.210 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:19.210 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:19.210 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:19.210 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:19.210 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:19.210 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:19.210 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:19.210 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:19.469 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:19.469 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:19.728 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=161589f7-4498-40f1-80ae-bb485fc1e141 00:33:19.728 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 161589f7-4498-40f1-80ae-bb485fc1e141 00:33:19.728 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:19.728 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:19.728 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:19.728 17:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 161589f7-4498-40f1-80ae-bb485fc1e141 lvol 150 00:33:19.987 17:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ca59c1d6-2b80-4b18-b13b-4366fc18f15b 00:33:19.987 17:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:19.987 17:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:20.246 [2024-10-14 17:50:19.194230] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:20.246 [2024-10-14 17:50:19.194375] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:20.246 true 00:33:20.246 17:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:20.246 17:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 161589f7-4498-40f1-80ae-bb485fc1e141 00:33:20.505 17:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:20.505 17:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:20.505 17:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ca59c1d6-2b80-4b18-b13b-4366fc18f15b 00:33:20.764 17:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:21.024 [2024-10-14 17:50:19.906722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:21.024 17:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:21.024 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1302102 00:33:21.024 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:21.024 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:21.024 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1302102 /var/tmp/bdevperf.sock 00:33:21.024 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1302102 ']' 00:33:21.024 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:33:21.024 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:21.024 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:21.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:21.024 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:21.024 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:21.024 [2024-10-14 17:50:20.163732] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:33:21.024 [2024-10-14 17:50:20.163785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302102 ] 00:33:21.283 [2024-10-14 17:50:20.231983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.283 [2024-10-14 17:50:20.274301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.283 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:21.283 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:33:21.283 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:21.542 Nvme0n1 00:33:21.542 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:21.802 [ 00:33:21.802 { 00:33:21.802 "name": "Nvme0n1", 00:33:21.802 "aliases": [ 00:33:21.802 "ca59c1d6-2b80-4b18-b13b-4366fc18f15b" 00:33:21.802 ], 00:33:21.802 "product_name": "NVMe disk", 00:33:21.802 "block_size": 4096, 00:33:21.802 "num_blocks": 38912, 00:33:21.802 "uuid": "ca59c1d6-2b80-4b18-b13b-4366fc18f15b", 00:33:21.802 "numa_id": 1, 00:33:21.802 "assigned_rate_limits": { 00:33:21.802 "rw_ios_per_sec": 0, 00:33:21.802 "rw_mbytes_per_sec": 0, 00:33:21.802 "r_mbytes_per_sec": 0, 00:33:21.802 "w_mbytes_per_sec": 0 00:33:21.802 }, 00:33:21.802 "claimed": false, 00:33:21.802 "zoned": false, 00:33:21.802 "supported_io_types": { 00:33:21.802 "read": true, 00:33:21.802 "write": true, 00:33:21.802 "unmap": true, 00:33:21.802 "flush": true, 00:33:21.802 "reset": true, 00:33:21.802 "nvme_admin": true, 00:33:21.802 "nvme_io": true, 00:33:21.802 "nvme_io_md": false, 00:33:21.802 "write_zeroes": true, 00:33:21.802 "zcopy": false, 00:33:21.802 "get_zone_info": false, 00:33:21.802 "zone_management": false, 00:33:21.802 "zone_append": false, 00:33:21.802 "compare": true, 00:33:21.802 "compare_and_write": true, 00:33:21.802 "abort": true, 00:33:21.802 "seek_hole": false, 00:33:21.802 "seek_data": false, 00:33:21.802 "copy": true, 
00:33:21.802 "nvme_iov_md": false 00:33:21.802 }, 00:33:21.802 "memory_domains": [ 00:33:21.802 { 00:33:21.802 "dma_device_id": "system", 00:33:21.802 "dma_device_type": 1 00:33:21.802 } 00:33:21.802 ], 00:33:21.802 "driver_specific": { 00:33:21.802 "nvme": [ 00:33:21.802 { 00:33:21.802 "trid": { 00:33:21.802 "trtype": "TCP", 00:33:21.802 "adrfam": "IPv4", 00:33:21.802 "traddr": "10.0.0.2", 00:33:21.802 "trsvcid": "4420", 00:33:21.802 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:21.802 }, 00:33:21.802 "ctrlr_data": { 00:33:21.802 "cntlid": 1, 00:33:21.802 "vendor_id": "0x8086", 00:33:21.802 "model_number": "SPDK bdev Controller", 00:33:21.802 "serial_number": "SPDK0", 00:33:21.802 "firmware_revision": "25.01", 00:33:21.802 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:21.802 "oacs": { 00:33:21.802 "security": 0, 00:33:21.802 "format": 0, 00:33:21.802 "firmware": 0, 00:33:21.802 "ns_manage": 0 00:33:21.802 }, 00:33:21.802 "multi_ctrlr": true, 00:33:21.802 "ana_reporting": false 00:33:21.802 }, 00:33:21.802 "vs": { 00:33:21.802 "nvme_version": "1.3" 00:33:21.802 }, 00:33:21.802 "ns_data": { 00:33:21.802 "id": 1, 00:33:21.802 "can_share": true 00:33:21.802 } 00:33:21.802 } 00:33:21.802 ], 00:33:21.802 "mp_policy": "active_passive" 00:33:21.802 } 00:33:21.802 } 00:33:21.802 ] 00:33:21.802 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:21.802 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1302111 00:33:21.802 17:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:21.802 Running I/O for 10 seconds... 
00:33:23.180 Latency(us) 00:33:23.180 [2024-10-14T15:50:22.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:23.180 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:23.180 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:33:23.180 [2024-10-14T15:50:22.318Z] =================================================================================================================== 00:33:23.180 [2024-10-14T15:50:22.318Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:33:23.180 00:33:23.747 17:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 161589f7-4498-40f1-80ae-bb485fc1e141 00:33:24.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:24.006 Nvme0n1 : 2.00 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:33:24.006 [2024-10-14T15:50:23.144Z] =================================================================================================================== 00:33:24.006 [2024-10-14T15:50:23.144Z] Total : 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:33:24.006 00:33:24.006 true 00:33:24.006 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 161589f7-4498-40f1-80ae-bb485fc1e141 00:33:24.006 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:24.265 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:24.265 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:24.265 17:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1302111 00:33:24.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:24.832 Nvme0n1 : 3.00 23410.33 91.45 0.00 0.00 0.00 0.00 0.00 00:33:24.832 [2024-10-14T15:50:23.970Z] =================================================================================================================== 00:33:24.832 [2024-10-14T15:50:23.970Z] Total : 23410.33 91.45 0.00 0.00 0.00 0.00 0.00 00:33:24.832 00:33:26.210 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:26.210 Nvme0n1 : 4.00 23495.00 91.78 0.00 0.00 0.00 0.00 0.00 00:33:26.210 [2024-10-14T15:50:25.348Z] =================================================================================================================== 00:33:26.210 [2024-10-14T15:50:25.348Z] Total : 23495.00 91.78 0.00 0.00 0.00 0.00 0.00 00:33:26.210 00:33:27.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:27.149 Nvme0n1 : 5.00 23571.20 92.08 0.00 0.00 0.00 0.00 0.00 00:33:27.149 [2024-10-14T15:50:26.287Z] =================================================================================================================== 00:33:27.149 [2024-10-14T15:50:26.287Z] Total : 23571.20 92.08 0.00 0.00 0.00 0.00 0.00 00:33:27.149 00:33:28.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:28.085 Nvme0n1 : 6.00 23622.00 92.27 0.00 0.00 0.00 0.00 0.00 00:33:28.085 [2024-10-14T15:50:27.223Z] 
=================================================================================================================== 00:33:28.085 [2024-10-14T15:50:27.223Z] Total : 23622.00 92.27 0.00 0.00 0.00 0.00 0.00 00:33:28.085 00:33:29.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:29.022 Nvme0n1 : 7.00 23649.29 92.38 0.00 0.00 0.00 0.00 0.00 00:33:29.022 [2024-10-14T15:50:28.160Z] =================================================================================================================== 00:33:29.022 [2024-10-14T15:50:28.160Z] Total : 23649.29 92.38 0.00 0.00 0.00 0.00 0.00 00:33:29.022 00:33:29.958 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:29.959 Nvme0n1 : 8.00 23677.62 92.49 0.00 0.00 0.00 0.00 0.00 00:33:29.959 [2024-10-14T15:50:29.097Z] =================================================================================================================== 00:33:29.959 [2024-10-14T15:50:29.097Z] Total : 23677.62 92.49 0.00 0.00 0.00 0.00 0.00 00:33:29.959 00:33:30.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:30.895 Nvme0n1 : 9.00 23699.67 92.58 0.00 0.00 0.00 0.00 0.00 00:33:30.895 [2024-10-14T15:50:30.033Z] =================================================================================================================== 00:33:30.895 [2024-10-14T15:50:30.033Z] Total : 23699.67 92.58 0.00 0.00 0.00 0.00 0.00 00:33:30.895 00:33:31.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:31.832 Nvme0n1 : 10.00 23666.50 92.45 0.00 0.00 0.00 0.00 0.00 00:33:31.832 [2024-10-14T15:50:30.970Z] =================================================================================================================== 00:33:31.832 [2024-10-14T15:50:30.970Z] Total : 23666.50 92.45 0.00 0.00 0.00 0.00 0.00 00:33:31.832 00:33:31.832 00:33:31.832 Latency(us) 00:33:31.832 [2024-10-14T15:50:30.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:31.832 Nvme0n1 : 10.00 23673.35 92.47 0.00 0.00 5404.21 3510.86 25964.74 00:33:31.832 [2024-10-14T15:50:30.970Z] =================================================================================================================== 00:33:31.832 [2024-10-14T15:50:30.970Z] Total : 23673.35 92.47 0.00 0.00 5404.21 3510.86 25964.74 00:33:31.832 { 00:33:31.832 "results": [ 00:33:31.832 { 00:33:31.832 "job": "Nvme0n1", 00:33:31.832 "core_mask": "0x2", 00:33:31.832 "workload": "randwrite", 00:33:31.832 "status": "finished", 00:33:31.832 "queue_depth": 128, 00:33:31.832 "io_size": 4096, 00:33:31.832 "runtime": 10.002514, 00:33:31.832 "iops": 23673.348520182026, 00:33:31.832 "mibps": 92.47401765696104, 00:33:31.832 "io_failed": 0, 00:33:31.832 "io_timeout": 0, 00:33:31.832 "avg_latency_us": 5404.205985915365, 00:33:31.832 "min_latency_us": 3510.8571428571427, 00:33:31.832 "max_latency_us": 25964.73904761905 00:33:31.832 } 00:33:31.832 ], 00:33:31.832 "core_count": 1 00:33:31.832 } 00:33:31.832 17:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1302102 00:33:31.832 17:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1302102 ']' 00:33:31.832 17:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1302102 
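killprocess is the harness's graceful-stop helper: it first verifies the pid still belongs to the process it started, then signals it and waits. Roughly (a sketch of the pattern visible in the trace, not the helper verbatim):

  kill -0 "$bdevperf_pid"                      # still alive?
  ps --no-headers -o comm= "$bdevperf_pid"     # still the process we launched?
  kill "$bdevperf_pid" && wait "$bdevperf_pid" # terminate, then reap

bdevperf answers the signal by printing one final latency block after "Received shutdown signal" — the all-zero rows just below are that shutdown summary, not a failed run.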
00:33:31.832 17:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:33:31.832 17:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:31.832 17:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1302102 00:33:32.091 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:32.091 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:32.091 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1302102' 00:33:32.091 killing process with pid 1302102 00:33:32.091 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1302102 00:33:32.091 Received shutdown signal, test time was about 10.000000 seconds 00:33:32.091 00:33:32.091 Latency(us) 00:33:32.091 [2024-10-14T15:50:31.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.091 [2024-10-14T15:50:31.229Z] =================================================================================================================== 00:33:32.091 [2024-10-14T15:50:31.229Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:32.091 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1302102 00:33:32.091 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:32.350 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:32.609 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 161589f7-4498-40f1-80ae-bb485fc1e141 00:33:32.609 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:32.869 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:32.869 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:33:32.869 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:32.869 [2024-10-14 17:50:31.938302] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:32.869 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 161589f7-4498-40f1-80ae-bb485fc1e141 
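Deleting the base AIO bdev takes the lvstore down with it, so the very next bdev_lvol_get_lvstores has to fail; NOT is the harness's assert-this-fails wrapper, which runs the command and inverts its exit status. In effect (a sketch of what the wrapper asserts):

  "$SPDK/scripts/rpc.py" bdev_aio_delete aio_bdev
  if "$SPDK/scripts/rpc.py" bdev_lvol_get_lvstores -u "$lvs"; then
      echo "lvstore still answers after its base bdev was deleted" >&2
      exit 1
  fi

The expected failure is the JSON-RPC error traced below: code -19, "No such device".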
00:33:32.869 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:33:32.869 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 161589f7-4498-40f1-80ae-bb485fc1e141 00:33:32.869 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:32.869 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:32.869 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:32.869 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:32.869 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:32.869 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:32.869 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:32.869 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:32.869 17:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 161589f7-4498-40f1-80ae-bb485fc1e141 00:33:33.128 request: 00:33:33.128 { 00:33:33.128 "uuid": "161589f7-4498-40f1-80ae-bb485fc1e141", 00:33:33.128 "method": "bdev_lvol_get_lvstores", 00:33:33.128 "req_id": 1 00:33:33.128 } 00:33:33.128 Got JSON-RPC error response 00:33:33.128 response: 00:33:33.128 { 00:33:33.128 "code": -19, 00:33:33.128 "message": "No such device" 00:33:33.128 } 00:33:33.128 17:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:33:33.128 17:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:33.128 17:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:33.128 17:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:33.128 17:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:33.387 aio_bdev 00:33:33.387 17:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
ca59c1d6-2b80-4b18-b13b-4366fc18f15b 00:33:33.387 17:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=ca59c1d6-2b80-4b18-b13b-4366fc18f15b 00:33:33.387 17:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:33.387 17:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:33:33.387 17:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:33.387 17:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:33.387 17:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:33.647 17:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ca59c1d6-2b80-4b18-b13b-4366fc18f15b -t 2000 00:33:33.647 [ 00:33:33.647 { 00:33:33.647 "name": "ca59c1d6-2b80-4b18-b13b-4366fc18f15b", 00:33:33.647 "aliases": [ 00:33:33.647 "lvs/lvol" 00:33:33.647 ], 00:33:33.647 "product_name": "Logical Volume", 00:33:33.647 "block_size": 4096, 00:33:33.647 "num_blocks": 38912, 00:33:33.647 "uuid": "ca59c1d6-2b80-4b18-b13b-4366fc18f15b", 00:33:33.647 "assigned_rate_limits": { 00:33:33.647 "rw_ios_per_sec": 0, 00:33:33.647 "rw_mbytes_per_sec": 0, 00:33:33.647 "r_mbytes_per_sec": 0, 00:33:33.647 "w_mbytes_per_sec": 0 00:33:33.647 }, 00:33:33.647 "claimed": false, 00:33:33.647 "zoned": false, 00:33:33.647 "supported_io_types": { 00:33:33.647 "read": true, 00:33:33.647 "write": true, 00:33:33.647 "unmap": true, 00:33:33.647 "flush": false, 00:33:33.647 "reset": true, 00:33:33.647 "nvme_admin": false, 00:33:33.647 "nvme_io": false, 00:33:33.647 "nvme_io_md": false, 00:33:33.647 "write_zeroes": true, 00:33:33.647 "zcopy": false, 00:33:33.647 "get_zone_info": false, 00:33:33.647 "zone_management": false, 00:33:33.647 "zone_append": false, 00:33:33.647 "compare": false, 00:33:33.647 "compare_and_write": false, 00:33:33.647 "abort": false, 00:33:33.647 "seek_hole": true, 00:33:33.647 "seek_data": true, 00:33:33.647 "copy": false, 00:33:33.647 "nvme_iov_md": false 00:33:33.647 }, 00:33:33.647 "driver_specific": { 00:33:33.647 "lvol": { 00:33:33.647 "lvol_store_uuid": "161589f7-4498-40f1-80ae-bb485fc1e141", 00:33:33.647 "base_bdev": "aio_bdev", 00:33:33.647 "thin_provision": false, 00:33:33.647 "num_allocated_clusters": 38, 00:33:33.647 "snapshot": false, 00:33:33.647 "clone": false, 00:33:33.647 "esnap_clone": false 00:33:33.647 } 00:33:33.647 } 00:33:33.647 } 00:33:33.647 ] 00:33:33.647 17:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:33:33.647 17:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 161589f7-4498-40f1-80ae-bb485fc1e141 00:33:33.647 17:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:33.906 17:50:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:33.906 17:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 161589f7-4498-40f1-80ae-bb485fc1e141 00:33:33.906 17:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:34.165 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:34.165 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ca59c1d6-2b80-4b18-b13b-4366fc18f15b 00:33:34.165 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 161589f7-4498-40f1-80ae-bb485fc1e141 00:33:34.423 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:34.683 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:34.683 00:33:34.683 real 0m15.508s 00:33:34.683 user 0m14.993s 00:33:34.683 sys 0m1.507s 00:33:34.683 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:34.683 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:34.683 ************************************ 00:33:34.683 END TEST lvs_grow_clean 00:33:34.683 ************************************ 00:33:34.683 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:33:34.683 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:34.683 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:34.683 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:34.683 ************************************ 00:33:34.683 START TEST lvs_grow_dirty 00:33:34.683 ************************************ 00:33:34.683 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:33:34.683 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:34.683 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:34.683 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:34.683 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:34.683 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:34.683 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:34.683 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:34.683 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:34.683 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:34.942 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:34.942 17:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:35.201 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=56d07605-d4f4-4a02-955e-288ac27a4f7f 00:33:35.201 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56d07605-d4f4-4a02-955e-288ac27a4f7f 00:33:35.201 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:35.459 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:35.459 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:35.460 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 56d07605-d4f4-4a02-955e-288ac27a4f7f lvol 150 00:33:35.460 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=66c2869b-1c35-4207-946c-ab959dffd836 00:33:35.460 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:35.460 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:35.718 [2024-10-14 17:50:34.754228] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:35.718 [2024-10-14 17:50:34.754358] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:35.718 true 00:33:35.718 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:35.718 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56d07605-d4f4-4a02-955e-288ac27a4f7f 00:33:35.977 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:35.977 17:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:36.237 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 66c2869b-1c35-4207-946c-ab959dffd836 00:33:36.237 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:36.496 [2024-10-14 17:50:35.530684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:36.496 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:36.756 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1304540 00:33:36.756 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:36.756 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:36.756 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1304540 /var/tmp/bdevperf.sock 00:33:36.756 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1304540 ']' 00:33:36.756 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:36.756 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:36.756 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:36.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
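The dirty variant repeats the same build-grow-verify sequence; the difference comes at teardown, where the target is hard-killed with the lvstore still open so that the next load has to run blobstore recovery. The cluster assertions themselves are plain jq over the lvstore dump, something like:

  data_clusters=$("$SPDK/scripts/rpc.py" bdev_lvol_get_lvstores -u "$lvs" \
                  | jq -r '.[0].total_data_clusters')
  (( data_clusters == 49 ))    # before the grow; 99 afterwards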
00:33:36.756 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:36.756 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:36.756 [2024-10-14 17:50:35.769219] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:33:36.756 [2024-10-14 17:50:35.769268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304540 ] 00:33:36.756 [2024-10-14 17:50:35.837712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.756 [2024-10-14 17:50:35.880060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:37.015 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:37.015 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:33:37.015 17:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:37.274 Nvme0n1 00:33:37.274 17:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:37.533 [ 00:33:37.533 { 00:33:37.534 "name": "Nvme0n1", 00:33:37.534 "aliases": [ 00:33:37.534 "66c2869b-1c35-4207-946c-ab959dffd836" 00:33:37.534 ], 00:33:37.534 "product_name": "NVMe disk", 00:33:37.534 "block_size": 4096, 00:33:37.534 "num_blocks": 38912, 00:33:37.534 "uuid": "66c2869b-1c35-4207-946c-ab959dffd836", 00:33:37.534 "numa_id": 1, 00:33:37.534 "assigned_rate_limits": { 00:33:37.534 "rw_ios_per_sec": 0, 00:33:37.534 "rw_mbytes_per_sec": 0, 00:33:37.534 "r_mbytes_per_sec": 0, 00:33:37.534 "w_mbytes_per_sec": 0 00:33:37.534 }, 00:33:37.534 "claimed": false, 00:33:37.534 "zoned": false, 00:33:37.534 "supported_io_types": { 00:33:37.534 "read": true, 00:33:37.534 "write": true, 00:33:37.534 "unmap": true, 00:33:37.534 "flush": true, 00:33:37.534 "reset": true, 00:33:37.534 "nvme_admin": true, 00:33:37.534 "nvme_io": true, 00:33:37.534 "nvme_io_md": false, 00:33:37.534 "write_zeroes": true, 00:33:37.534 "zcopy": false, 00:33:37.534 "get_zone_info": false, 00:33:37.534 "zone_management": false, 00:33:37.534 "zone_append": false, 00:33:37.534 "compare": true, 00:33:37.534 "compare_and_write": true, 00:33:37.534 "abort": true, 00:33:37.534 "seek_hole": false, 00:33:37.534 "seek_data": false, 00:33:37.534 "copy": true, 00:33:37.534 "nvme_iov_md": false 00:33:37.534 }, 00:33:37.534 "memory_domains": [ 00:33:37.534 { 00:33:37.534 "dma_device_id": "system", 00:33:37.534 "dma_device_type": 1 00:33:37.534 } 00:33:37.534 ], 00:33:37.534 "driver_specific": { 00:33:37.534 "nvme": [ 00:33:37.534 { 00:33:37.534 "trid": { 00:33:37.534 "trtype": "TCP", 00:33:37.534 "adrfam": "IPv4", 00:33:37.534 "traddr": "10.0.0.2", 00:33:37.534 "trsvcid": "4420", 00:33:37.534 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:37.534 }, 00:33:37.534 "ctrlr_data": 
{ 00:33:37.534 "cntlid": 1, 00:33:37.534 "vendor_id": "0x8086", 00:33:37.534 "model_number": "SPDK bdev Controller", 00:33:37.534 "serial_number": "SPDK0", 00:33:37.534 "firmware_revision": "25.01", 00:33:37.534 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:37.534 "oacs": { 00:33:37.534 "security": 0, 00:33:37.534 "format": 0, 00:33:37.534 "firmware": 0, 00:33:37.534 "ns_manage": 0 00:33:37.534 }, 00:33:37.534 "multi_ctrlr": true, 00:33:37.534 "ana_reporting": false 00:33:37.534 }, 00:33:37.534 "vs": { 00:33:37.534 "nvme_version": "1.3" 00:33:37.534 }, 00:33:37.534 "ns_data": { 00:33:37.534 "id": 1, 00:33:37.534 "can_share": true 00:33:37.534 } 00:33:37.534 } 00:33:37.534 ], 00:33:37.534 "mp_policy": "active_passive" 00:33:37.534 } 00:33:37.534 } 00:33:37.534 ] 00:33:37.534 17:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:37.534 17:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1304693 00:33:37.534 17:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:37.534 Running I/O for 10 seconds... 00:33:38.912 Latency(us) 00:33:38.912 [2024-10-14T15:50:38.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:38.912 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:33:38.912 [2024-10-14T15:50:38.050Z] =================================================================================================================== 00:33:38.912 [2024-10-14T15:50:38.050Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:33:38.912 00:33:39.480 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 56d07605-d4f4-4a02-955e-288ac27a4f7f 00:33:39.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:39.480 Nvme0n1 : 2.00 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:33:39.480 [2024-10-14T15:50:38.618Z] =================================================================================================================== 00:33:39.481 [2024-10-14T15:50:38.619Z] Total : 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:33:39.481 00:33:39.741 true 00:33:39.741 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56d07605-d4f4-4a02-955e-288ac27a4f7f 00:33:39.741 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:40.001 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:40.001 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:40.001 17:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1304693 00:33:40.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:40.569 Nvme0n1 : 
3.00 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:33:40.569 [2024-10-14T15:50:39.707Z] =================================================================================================================== 00:33:40.569 [2024-10-14T15:50:39.707Z] Total : 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:33:40.569 00:33:41.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:41.507 Nvme0n1 : 4.00 23352.25 91.22 0.00 0.00 0.00 0.00 0.00 00:33:41.507 [2024-10-14T15:50:40.645Z] =================================================================================================================== 00:33:41.507 [2024-10-14T15:50:40.645Z] Total : 23352.25 91.22 0.00 0.00 0.00 0.00 0.00 00:33:41.507 00:33:42.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:42.886 Nvme0n1 : 5.00 23454.00 91.62 0.00 0.00 0.00 0.00 0.00 00:33:42.886 [2024-10-14T15:50:42.024Z] =================================================================================================================== 00:33:42.886 [2024-10-14T15:50:42.024Z] Total : 23454.00 91.62 0.00 0.00 0.00 0.00 0.00 00:33:42.886 00:33:43.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:43.823 Nvme0n1 : 6.00 23524.33 91.89 0.00 0.00 0.00 0.00 0.00 00:33:43.823 [2024-10-14T15:50:42.961Z] =================================================================================================================== 00:33:43.823 [2024-10-14T15:50:42.961Z] Total : 23524.33 91.89 0.00 0.00 0.00 0.00 0.00 00:33:43.823 00:33:44.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:44.761 Nvme0n1 : 7.00 23556.43 92.02 0.00 0.00 0.00 0.00 0.00 00:33:44.761 [2024-10-14T15:50:43.899Z] =================================================================================================================== 00:33:44.761 [2024-10-14T15:50:43.899Z] Total : 23556.43 92.02 0.00 0.00 0.00 0.00 0.00 00:33:44.761 00:33:45.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:45.698 Nvme0n1 : 8.00 23596.38 92.17 0.00 0.00 0.00 0.00 0.00 00:33:45.698 [2024-10-14T15:50:44.836Z] =================================================================================================================== 00:33:45.698 [2024-10-14T15:50:44.836Z] Total : 23596.38 92.17 0.00 0.00 0.00 0.00 0.00 00:33:45.698 00:33:46.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:46.636 Nvme0n1 : 9.00 23627.44 92.29 0.00 0.00 0.00 0.00 0.00 00:33:46.636 [2024-10-14T15:50:45.774Z] =================================================================================================================== 00:33:46.636 [2024-10-14T15:50:45.774Z] Total : 23627.44 92.29 0.00 0.00 0.00 0.00 0.00 00:33:46.636 00:33:47.573 00:33:47.573 Latency(us) 00:33:47.573 [2024-10-14T15:50:46.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:47.573 Nvme0n1 : 10.00 23649.10 92.38 0.00 0.00 5409.52 3120.76 26089.57 00:33:47.573 [2024-10-14T15:50:46.711Z] =================================================================================================================== 00:33:47.573 [2024-10-14T15:50:46.711Z] Total : 23649.10 92.38 0.00 0.00 5409.52 3120.76 26089.57 00:33:47.573 { 00:33:47.573 "results": [ 00:33:47.573 { 00:33:47.573 "job": "Nvme0n1", 00:33:47.573 "core_mask": "0x2", 00:33:47.573 "workload": "randwrite", 00:33:47.573 "status": 
"finished", 00:33:47.573 "queue_depth": 128, 00:33:47.573 "io_size": 4096, 00:33:47.573 "runtime": 10.001396, 00:33:47.573 "iops": 23649.09858583742, 00:33:47.573 "mibps": 92.37929135092742, 00:33:47.573 "io_failed": 0, 00:33:47.573 "io_timeout": 0, 00:33:47.573 "avg_latency_us": 5409.523782722946, 00:33:47.573 "min_latency_us": 3120.7619047619046, 00:33:47.573 "max_latency_us": 26089.569523809525 00:33:47.573 } 00:33:47.573 ], 00:33:47.573 "core_count": 1 00:33:47.573 } 00:33:47.573 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1304540 00:33:47.573 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1304540 ']' 00:33:47.573 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1304540 00:33:47.573 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:33:47.573 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:47.573 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1304540 00:33:47.832 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:47.833 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:47.833 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1304540' 00:33:47.833 killing process with pid 1304540 00:33:47.833 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1304540 00:33:47.833 Received shutdown signal, test time was about 10.000000 seconds 00:33:47.833 00:33:47.833 Latency(us) 00:33:47.833 [2024-10-14T15:50:46.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.833 [2024-10-14T15:50:46.971Z] =================================================================================================================== 00:33:47.833 [2024-10-14T15:50:46.971Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:47.833 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1304540 00:33:47.833 17:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:48.091 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:48.350 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56d07605-d4f4-4a02-955e-288ac27a4f7f 00:33:48.350 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 
00:33:48.350 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:48.350 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:48.350 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1301604 00:33:48.350 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1301604 00:33:48.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1301604 Killed "${NVMF_APP[@]}" "$@" 00:33:48.610 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:48.610 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:48.610 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:48.610 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:48.610 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:48.610 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1306509 00:33:48.610 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1306509 00:33:48.610 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:48.610 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1306509 ']' 00:33:48.610 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:48.610 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:48.610 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:48.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:48.610 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:48.610 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:48.610 [2024-10-14 17:50:47.564629] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:48.610 [2024-10-14 17:50:47.565541] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
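This is the crash the dirty variant is named for: the long-running nvmf target (pid 1301604) is SIGKILLed with the lvstore still open, a fresh target is started in interrupt mode, and the same AIO file is registered again; loading the blobstore from the dirty file triggers the recovery pass reported just below ("Performing recovery on blobstore", then each blob replayed). Condensed (a sketch using the paths above, netns prefix elided):

  kill -9 "$nvmfpid"     # no clean blobstore unload -- lvstore left dirty
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  # re-creating the AIO bdev over the same file makes lvol load the
  # blobstore, which detects the unclean shutdown and runs recovery
  "$SPDK/scripts/rpc.py" bdev_aio_create "$SPDK/test/nvmf/target/aio_bdev" aio_bdev 4096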
00:33:48.610 [2024-10-14 17:50:47.565578] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:48.610 [2024-10-14 17:50:47.637948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.610 [2024-10-14 17:50:47.678263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:48.610 [2024-10-14 17:50:47.678298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:48.610 [2024-10-14 17:50:47.678305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:48.610 [2024-10-14 17:50:47.678310] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:48.610 [2024-10-14 17:50:47.678315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:48.610 [2024-10-14 17:50:47.678843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.610 [2024-10-14 17:50:47.745897] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:48.610 [2024-10-14 17:50:47.746118] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:48.869 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:48.869 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:33:48.869 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:48.869 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:48.869 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:48.869 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:48.869 17:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:48.869 [2024-10-14 17:50:47.978794] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:48.869 [2024-10-14 17:50:47.978928] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:48.869 [2024-10-14 17:50:47.978984] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:49.128 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:49.128 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 66c2869b-1c35-4207-946c-ab959dffd836 00:33:49.128 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=66c2869b-1c35-4207-946c-ab959dffd836 00:33:49.128 17:50:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:49.128 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:33:49.128 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:49.128 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:49.128 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:49.128 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 66c2869b-1c35-4207-946c-ab959dffd836 -t 2000 00:33:49.387 [ 00:33:49.387 { 00:33:49.387 "name": "66c2869b-1c35-4207-946c-ab959dffd836", 00:33:49.387 "aliases": [ 00:33:49.387 "lvs/lvol" 00:33:49.387 ], 00:33:49.387 "product_name": "Logical Volume", 00:33:49.387 "block_size": 4096, 00:33:49.387 "num_blocks": 38912, 00:33:49.387 "uuid": "66c2869b-1c35-4207-946c-ab959dffd836", 00:33:49.387 "assigned_rate_limits": { 00:33:49.387 "rw_ios_per_sec": 0, 00:33:49.387 "rw_mbytes_per_sec": 0, 00:33:49.387 "r_mbytes_per_sec": 0, 00:33:49.387 "w_mbytes_per_sec": 0 00:33:49.387 }, 00:33:49.387 "claimed": false, 00:33:49.387 "zoned": false, 00:33:49.387 "supported_io_types": { 00:33:49.387 "read": true, 00:33:49.387 "write": true, 00:33:49.387 "unmap": true, 00:33:49.387 "flush": false, 00:33:49.387 "reset": true, 00:33:49.387 "nvme_admin": false, 00:33:49.387 "nvme_io": false, 00:33:49.387 "nvme_io_md": false, 00:33:49.387 "write_zeroes": true, 00:33:49.387 "zcopy": false, 00:33:49.387 "get_zone_info": false, 00:33:49.387 "zone_management": false, 00:33:49.387 "zone_append": false, 00:33:49.387 "compare": false, 00:33:49.387 "compare_and_write": false, 00:33:49.387 "abort": false, 00:33:49.387 "seek_hole": true, 00:33:49.387 "seek_data": true, 00:33:49.387 "copy": false, 00:33:49.387 "nvme_iov_md": false 00:33:49.387 }, 00:33:49.387 "driver_specific": { 00:33:49.387 "lvol": { 00:33:49.387 "lvol_store_uuid": "56d07605-d4f4-4a02-955e-288ac27a4f7f", 00:33:49.387 "base_bdev": "aio_bdev", 00:33:49.387 "thin_provision": false, 00:33:49.387 "num_allocated_clusters": 38, 00:33:49.387 "snapshot": false, 00:33:49.387 "clone": false, 00:33:49.387 "esnap_clone": false 00:33:49.387 } 00:33:49.387 } 00:33:49.387 } 00:33:49.387 ] 00:33:49.387 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:33:49.387 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56d07605-d4f4-4a02-955e-288ac27a4f7f 00:33:49.387 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:49.646 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:49.646 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56d07605-d4f4-4a02-955e-288ac27a4f7f 00:33:49.646 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:49.905 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:49.905 17:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:49.905 [2024-10-14 17:50:48.967244] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:49.905 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56d07605-d4f4-4a02-955e-288ac27a4f7f 00:33:49.905 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:33:49.905 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56d07605-d4f4-4a02-955e-288ac27a4f7f 00:33:49.905 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:49.905 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:49.905 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:49.905 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:49.905 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:49.905 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:49.905 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:49.905 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:49.905 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56d07605-d4f4-4a02-955e-288ac27a4f7f 00:33:50.164 request: 00:33:50.164 { 00:33:50.164 "uuid": "56d07605-d4f4-4a02-955e-288ac27a4f7f", 00:33:50.164 "method": "bdev_lvol_get_lvstores", 00:33:50.164 "req_id": 1 00:33:50.164 } 00:33:50.164 Got JSON-RPC error response 00:33:50.164 response: 00:33:50.164 { 00:33:50.164 "code": -19, 00:33:50.164 "message": "No such device" 
00:33:50.164 } 00:33:50.164 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:33:50.164 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:50.164 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:50.164 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:50.164 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:50.424 aio_bdev 00:33:50.424 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 66c2869b-1c35-4207-946c-ab959dffd836 00:33:50.424 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=66c2869b-1c35-4207-946c-ab959dffd836 00:33:50.424 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:50.424 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:33:50.424 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:50.424 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:50.424 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:50.683 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 66c2869b-1c35-4207-946c-ab959dffd836 -t 2000 00:33:50.683 [ 00:33:50.683 { 00:33:50.683 "name": "66c2869b-1c35-4207-946c-ab959dffd836", 00:33:50.683 "aliases": [ 00:33:50.683 "lvs/lvol" 00:33:50.683 ], 00:33:50.683 "product_name": "Logical Volume", 00:33:50.683 "block_size": 4096, 00:33:50.683 "num_blocks": 38912, 00:33:50.683 "uuid": "66c2869b-1c35-4207-946c-ab959dffd836", 00:33:50.683 "assigned_rate_limits": { 00:33:50.683 "rw_ios_per_sec": 0, 00:33:50.683 "rw_mbytes_per_sec": 0, 00:33:50.683 "r_mbytes_per_sec": 0, 00:33:50.683 "w_mbytes_per_sec": 0 00:33:50.683 }, 00:33:50.683 "claimed": false, 00:33:50.683 "zoned": false, 00:33:50.683 "supported_io_types": { 00:33:50.683 "read": true, 00:33:50.683 "write": true, 00:33:50.683 "unmap": true, 00:33:50.683 "flush": false, 00:33:50.683 "reset": true, 00:33:50.683 "nvme_admin": false, 00:33:50.683 "nvme_io": false, 00:33:50.683 "nvme_io_md": false, 00:33:50.683 "write_zeroes": true, 00:33:50.683 "zcopy": false, 00:33:50.683 "get_zone_info": false, 00:33:50.683 "zone_management": false, 00:33:50.683 "zone_append": false, 00:33:50.683 "compare": false, 00:33:50.683 "compare_and_write": false, 00:33:50.683 "abort": false, 00:33:50.683 "seek_hole": true, 00:33:50.683 "seek_data": true, 00:33:50.683 "copy": false, 
00:33:50.683 "nvme_iov_md": false 00:33:50.683 }, 00:33:50.683 "driver_specific": { 00:33:50.683 "lvol": { 00:33:50.683 "lvol_store_uuid": "56d07605-d4f4-4a02-955e-288ac27a4f7f", 00:33:50.683 "base_bdev": "aio_bdev", 00:33:50.683 "thin_provision": false, 00:33:50.683 "num_allocated_clusters": 38, 00:33:50.683 "snapshot": false, 00:33:50.683 "clone": false, 00:33:50.683 "esnap_clone": false 00:33:50.683 } 00:33:50.683 } 00:33:50.683 } 00:33:50.683 ] 00:33:50.683 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:33:50.683 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56d07605-d4f4-4a02-955e-288ac27a4f7f 00:33:50.683 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:50.942 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:50.943 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56d07605-d4f4-4a02-955e-288ac27a4f7f 00:33:50.943 17:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:51.201 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:51.201 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 66c2869b-1c35-4207-946c-ab959dffd836 00:33:51.461 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 56d07605-d4f4-4a02-955e-288ac27a4f7f 00:33:51.461 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:51.720 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:51.720 00:33:51.720 real 0m16.994s 00:33:51.720 user 0m34.405s 00:33:51.720 sys 0m3.853s 00:33:51.720 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:51.720 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:51.720 ************************************ 00:33:51.720 END TEST lvs_grow_dirty 00:33:51.720 ************************************ 00:33:51.720 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:51.720 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:33:51.720 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:33:51.720 
17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:33:51.720 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:51.720 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:33:51.720 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:33:51.720 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:33:51.720 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:51.720 nvmf_trace.0 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:51.979 rmmod nvme_tcp 00:33:51.979 rmmod nvme_fabrics 00:33:51.979 rmmod nvme_keyring 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1306509 ']' 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1306509 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1306509 ']' 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1306509 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1306509 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
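Before the target process is killed, process_shm above archives the SPDK trace shared-memory file for offline analysis. A minimal standalone sketch of that step, assuming OUT_DIR as a placeholder for the job's output directory (the trace uses the workspace's ../output path):

# Archive any SPDK trace shm files so spdk_trace can replay them later.
# OUT_DIR is an illustrative placeholder, not the path from the trace.
OUT_DIR=/tmp/spdk-output
for f in $(find /dev/shm -name '*.0' -printf '%f\n'); do
    tar -C /dev/shm/ -czf "$OUT_DIR/${f}_shm.tar.gz" "$f"
done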
00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1306509' 00:33:51.979 killing process with pid 1306509 00:33:51.979 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1306509 00:33:51.980 17:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1306509 00:33:52.239 17:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:52.239 17:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:52.239 17:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:52.239 17:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:33:52.239 17:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:33:52.239 17:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:52.239 17:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:33:52.239 17:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:52.239 17:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:52.239 17:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.239 17:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:52.239 17:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:54.143 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:54.144 00:33:54.144 real 0m41.687s 00:33:54.144 user 0m51.904s 00:33:54.144 sys 0m10.236s 00:33:54.144 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:54.144 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:54.144 ************************************ 00:33:54.144 END TEST nvmf_lvs_grow 00:33:54.144 ************************************ 00:33:54.144 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:54.144 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:54.144 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:54.144 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:54.403 ************************************ 00:33:54.403 START TEST nvmf_bdev_io_wait 00:33:54.403 ************************************ 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 
--interrupt-mode 00:33:54.404 * Looking for test storage... 00:33:54.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:54.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.404 --rc genhtml_branch_coverage=1 00:33:54.404 --rc genhtml_function_coverage=1 00:33:54.404 --rc genhtml_legend=1 00:33:54.404 --rc geninfo_all_blocks=1 00:33:54.404 --rc geninfo_unexecuted_blocks=1 00:33:54.404 00:33:54.404 ' 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:54.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.404 --rc genhtml_branch_coverage=1 00:33:54.404 --rc genhtml_function_coverage=1 00:33:54.404 --rc genhtml_legend=1 00:33:54.404 --rc geninfo_all_blocks=1 00:33:54.404 --rc geninfo_unexecuted_blocks=1 00:33:54.404 00:33:54.404 ' 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:54.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.404 --rc genhtml_branch_coverage=1 00:33:54.404 --rc genhtml_function_coverage=1 00:33:54.404 --rc genhtml_legend=1 00:33:54.404 --rc geninfo_all_blocks=1 00:33:54.404 --rc geninfo_unexecuted_blocks=1 00:33:54.404 00:33:54.404 ' 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:54.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.404 --rc genhtml_branch_coverage=1 00:33:54.404 --rc genhtml_function_coverage=1 00:33:54.404 --rc genhtml_legend=1 00:33:54.404 --rc geninfo_all_blocks=1 00:33:54.404 --rc 
geninfo_unexecuted_blocks=1 00:33:54.404 00:33:54.404 ' 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:33:54.404 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:54.405 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:54.405 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:54.405 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:54.405 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:54.405 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:54.405 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:54.405 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:54.405 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:54.405 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:54.405 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:54.405 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:54.405 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:54.405 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:54.405 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:54.405 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:54.405 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:54.405 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:54.405 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:33:54.405 17:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
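Each matching PCI function is then mapped to its kernel net device through sysfs, which is how the trace resolves 0000:86:00.0 to cvl_0_0 below. A sketch of that lookup, using the PCI address from the log:

# Resolve the net interface name(s) behind a PCI function, as the trace
# does with pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*).
pci=0000:86:00.0
ls /sys/bus/pci/devices/"$pci"/net/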
00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:01.075 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:01.075 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:01.075 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:01.076 Found net devices under 0000:86:00.0: cvl_0_0 00:34:01.076 
17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:01.076 Found net devices under 0000:86:00.1: cvl_0_1 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:01.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:01.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:34:01.076 00:34:01.076 --- 10.0.0.2 ping statistics --- 00:34:01.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.076 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:01.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:01.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:34:01.076 00:34:01.076 --- 10.0.0.1 ping statistics --- 00:34:01.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.076 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1310578 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1310578 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1310578 ']' 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
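nvmfappstart above launches nvmf_tgt inside the target namespace and then waits for it to come up. A simplified sketch of the same start-and-wait pattern; the binary path and flags match the trace, but the socket-polling loop is an assumption — the real waitforlisten helper does more than check for the socket file:

# Start the target in the namespace, then wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done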
00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:01.076 [2024-10-14 17:50:59.498149] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:01.076 [2024-10-14 17:50:59.499039] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:34:01.076 [2024-10-14 17:50:59.499071] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:01.076 [2024-10-14 17:50:59.572016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:01.076 [2024-10-14 17:50:59.618221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:01.076 [2024-10-14 17:50:59.618256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:01.076 [2024-10-14 17:50:59.618262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:01.076 [2024-10-14 17:50:59.618268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:01.076 [2024-10-14 17:50:59.618273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:01.076 [2024-10-14 17:50:59.619799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:01.076 [2024-10-14 17:50:59.619910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:01.076 [2024-10-14 17:50:59.620039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.076 [2024-10-14 17:50:59.620039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:01.076 [2024-10-14 17:50:59.620295] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
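At this point the target is running four reactors in interrupt mode (-m 0xF with --interrupt-mode). One way to inspect the reactor state from the RPC side, assuming the standard rpc.py client; framework_get_reactors lists each reactor with its lcore and attached threads, though the exact fields vary by SPDK version:

# Query reactor state over the app's RPC socket.
./scripts/rpc.py framework_get_reactors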
00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:01.076 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:01.077 [2024-10-14 17:50:59.757270] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:01.077 [2024-10-14 17:50:59.757948] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:01.077 [2024-10-14 17:50:59.758158] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:01.077 [2024-10-14 17:50:59.758289] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
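The trace below then assembles the target over RPC: create the TCP transport, back a subsystem with a Malloc bdev, and expose a listener. The same sequence as a plain script, with arguments copied from the trace and the rpc.py path abbreviated:

# Build the NVMe-oF TCP target step by step, as the test does.
rpc.py nvmf_create_transport -t tcp -o -u 8192   # -u sets the I/O unit size
rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420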
00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:01.077 [2024-10-14 17:50:59.768679] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:01.077 Malloc0 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:01.077 [2024-10-14 17:50:59.840866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1310603 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1310605 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:01.077 { 00:34:01.077 "params": { 00:34:01.077 "name": "Nvme$subsystem", 00:34:01.077 "trtype": "$TEST_TRANSPORT", 00:34:01.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:01.077 "adrfam": "ipv4", 00:34:01.077 "trsvcid": "$NVMF_PORT", 00:34:01.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:01.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:01.077 "hdgst": ${hdgst:-false}, 00:34:01.077 "ddgst": ${ddgst:-false} 00:34:01.077 }, 00:34:01.077 "method": "bdev_nvme_attach_controller" 00:34:01.077 } 00:34:01.077 EOF 00:34:01.077 )") 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1310607 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:01.077 { 00:34:01.077 "params": { 00:34:01.077 "name": "Nvme$subsystem", 00:34:01.077 "trtype": "$TEST_TRANSPORT", 00:34:01.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:01.077 "adrfam": "ipv4", 00:34:01.077 "trsvcid": "$NVMF_PORT", 00:34:01.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:01.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:01.077 "hdgst": ${hdgst:-false}, 00:34:01.077 "ddgst": ${ddgst:-false} 00:34:01.077 }, 00:34:01.077 "method": "bdev_nvme_attach_controller" 00:34:01.077 } 00:34:01.077 EOF 00:34:01.077 )") 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=1310610 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:01.077 { 00:34:01.077 "params": { 00:34:01.077 "name": "Nvme$subsystem", 00:34:01.077 "trtype": "$TEST_TRANSPORT", 00:34:01.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:01.077 "adrfam": "ipv4", 00:34:01.077 "trsvcid": "$NVMF_PORT", 00:34:01.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:01.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:01.077 "hdgst": ${hdgst:-false}, 00:34:01.077 "ddgst": ${ddgst:-false} 00:34:01.077 }, 00:34:01.077 "method": "bdev_nvme_attach_controller" 00:34:01.077 } 00:34:01.077 EOF 00:34:01.077 )") 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:01.077 { 00:34:01.077 "params": { 00:34:01.077 "name": "Nvme$subsystem", 00:34:01.077 "trtype": "$TEST_TRANSPORT", 00:34:01.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:01.077 "adrfam": "ipv4", 00:34:01.077 "trsvcid": "$NVMF_PORT", 00:34:01.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:01.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:01.077 "hdgst": ${hdgst:-false}, 00:34:01.077 "ddgst": ${ddgst:-false} 00:34:01.077 }, 00:34:01.077 "method": "bdev_nvme_attach_controller" 00:34:01.077 } 00:34:01.077 EOF 00:34:01.077 )") 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1310603 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
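Stripped of the xtrace noise, steps @20 through @25 above are the stock bring-up of a TCP target backed by a malloc bdev, every flag visible in the trace; with rpc.py directly it would read:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The four bdevperf processes launched right after (WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID) each get one workload (-w write/read/flush/unmap), one core (-m 0x10/0x20/0x40/0x80) and, as the EAL parameter lines below show, their own --file-prefix, so they coexist as independent DPDK processes.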
00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:01.077 "params": { 00:34:01.077 "name": "Nvme1", 00:34:01.077 "trtype": "tcp", 00:34:01.077 "traddr": "10.0.0.2", 00:34:01.077 "adrfam": "ipv4", 00:34:01.077 "trsvcid": "4420", 00:34:01.077 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:01.077 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:01.077 "hdgst": false, 00:34:01.077 "ddgst": false 00:34:01.077 }, 00:34:01.077 "method": "bdev_nvme_attach_controller" 00:34:01.077 }' 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:34:01.077 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:01.077 "params": { 00:34:01.077 "name": "Nvme1", 00:34:01.077 "trtype": "tcp", 00:34:01.077 "traddr": "10.0.0.2", 00:34:01.078 "adrfam": "ipv4", 00:34:01.078 "trsvcid": "4420", 00:34:01.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:01.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:01.078 "hdgst": false, 00:34:01.078 "ddgst": false 00:34:01.078 }, 00:34:01.078 "method": "bdev_nvme_attach_controller" 00:34:01.078 }' 00:34:01.078 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:34:01.078 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:01.078 "params": { 00:34:01.078 "name": "Nvme1", 00:34:01.078 "trtype": "tcp", 00:34:01.078 "traddr": "10.0.0.2", 00:34:01.078 "adrfam": "ipv4", 00:34:01.078 "trsvcid": "4420", 00:34:01.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:01.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:01.078 "hdgst": false, 00:34:01.078 "ddgst": false 00:34:01.078 }, 00:34:01.078 "method": "bdev_nvme_attach_controller" 00:34:01.078 }' 00:34:01.078 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:34:01.078 17:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:01.078 "params": { 00:34:01.078 "name": "Nvme1", 00:34:01.078 "trtype": "tcp", 00:34:01.078 "traddr": "10.0.0.2", 00:34:01.078 "adrfam": "ipv4", 00:34:01.078 "trsvcid": "4420", 00:34:01.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:01.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:01.078 "hdgst": false, 00:34:01.078 "ddgst": false 00:34:01.078 }, 00:34:01.078 "method": "bdev_nvme_attach_controller" 00:34:01.078 }' 00:34:01.078 [2024-10-14 17:50:59.890645] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:34:01.078 [2024-10-14 17:50:59.890694] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:01.078 [2024-10-14 17:50:59.892406] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
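Each gen_nvmf_target_json call above builds one bdev_nvme_attach_controller fragment per subsystem in a heredoc, substitutes the live values (TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_PORT=4420), joins the fragments with IFS=, and validates the result with jq; the four printf blocks here are those expanded fragments. What each bdevperf actually reads from --json /dev/fd/63 is, roughly, the fragment wrapped as a bdev-subsystem config (the outer wrapper is gen_nvmf_target_json's doing and is assumed here, not shown in the trace):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }

so each perf process comes up with an Nvme1n1 bdev already attached over TCP, with no separate attach step.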
00:34:01.078 [2024-10-14 17:50:59.892445] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:34:01.078 [2024-10-14 17:50:59.893450] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:34:01.078 [2024-10-14 17:50:59.893490] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:34:01.078 [2024-10-14 17:50:59.899670] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:34:01.078 [2024-10-14 17:50:59.899716] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:34:01.078 [2024-10-14 17:51:00.069942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.078 [2024-10-14 17:51:00.113781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:01.078 [2024-10-14 17:51:00.153058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.078 [2024-10-14 17:51:00.195539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:01.336 [2024-10-14 17:51:00.259148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.336 [2024-10-14 17:51:00.305652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:34:01.336 [2024-10-14 17:51:00.318280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.336 [2024-10-14 17:51:00.360954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:01.336 Running I/O for 1 seconds... 00:34:01.336 Running I/O for 1 seconds... 00:34:01.595 Running I/O for 1 seconds... 00:34:01.595 Running I/O for 1 seconds... 
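Two quick arithmetic checks against the numbers printed around here: the -m masks are one-hot, so 0x10..0x80 land the four jobs on cores 4..7 exactly as the 'Reactor started on core N' notices report, and bdevperf's MiB/s column is just IOPS times the 4096-byte IO size. A throwaway verification (hypothetical, self-contained):

  for m in 0x10 0x20 0x40 0x80; do
    v=$((m)) c=0
    while (( v > 1 )); do v=$((v >> 1)); c=$((c + 1)); done   # index of the set bit
    echo "mask $m -> core $c"                                 # 4, 5, 6, 7
  done
  # MiB/s = IOPS * io_size / 2^20, e.g. the write job's row in the tables below:
  awk 'BEGIN { printf "%.2f\n", 12120.87 * 4096 / (1024 * 1024) }'   # 47.35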
00:34:02.530 8940.00 IOPS, 34.92 MiB/s
[2024-10-14T15:51:01.668Z] 253240.00 IOPS, 989.22 MiB/s
00:34:02.530 Latency(us)
00:34:02.530 [2024-10-14T15:51:01.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:02.530 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:34:02.530 Nvme1n1 : 1.00 252863.39 987.75 0.00 0.00 504.20 222.35 1482.36
00:34:02.530 [2024-10-14T15:51:01.668Z] ===================================================================================================================
00:34:02.530 [2024-10-14T15:51:01.668Z] Total : 252863.39 987.75 0.00 0.00 504.20 222.35 1482.36
00:34:02.530
00:34:02.530 Latency(us)
00:34:02.530 [2024-10-14T15:51:01.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:02.530 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:34:02.530 Nvme1n1 : 1.02 8947.49 34.95 0.00 0.00 14206.91 3276.80 23093.64
00:34:02.530 [2024-10-14T15:51:01.668Z] ===================================================================================================================
00:34:02.530 [2024-10-14T15:51:01.668Z] Total : 8947.49 34.95 0.00 0.00 14206.91 3276.80 23093.64
00:34:02.530 7874.00 IOPS, 30.76 MiB/s
00:34:02.530 Latency(us)
00:34:02.530 [2024-10-14T15:51:01.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:02.530 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:34:02.530 Nvme1n1 : 1.01 7958.52 31.09 0.00 0.00 16034.91 4681.14 25964.74
00:34:02.530 [2024-10-14T15:51:01.668Z] ===================================================================================================================
00:34:02.530 [2024-10-14T15:51:01.668Z] Total : 7958.52 31.09 0.00 0.00 16034.91 4681.14 25964.74
00:34:02.530 12028.00 IOPS, 46.98 MiB/s
00:34:02.530 Latency(us)
00:34:02.530 [2024-10-14T15:51:01.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:02.530 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:34:02.530 Nvme1n1 : 1.01 12120.87 47.35 0.00 0.00 10533.60 3635.69 15104.49
00:34:02.530 [2024-10-14T15:51:01.668Z] ===================================================================================================================
00:34:02.530 [2024-10-14T15:51:01.668Z] Total : 12120.87 47.35 0.00 0.00 10533.60 3635.69 15104.49
00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1310605
00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1310607
00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1310610
00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:34:02.788 17:51:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:02.788 rmmod nvme_tcp 00:34:02.788 rmmod nvme_fabrics 00:34:02.788 rmmod nvme_keyring 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1310578 ']' 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1310578 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1310578 ']' 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1310578 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1310578 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1310578' 00:34:02.788 killing process with pid 1310578 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1310578 00:34:02.788 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1310578 00:34:03.047 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:03.047 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:03.047 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:03.047 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:34:03.047 17:51:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:34:03.047 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:03.047 17:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:34:03.047 17:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:03.047 17:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:03.047 17:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.047 17:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:03.047 17:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.953 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:04.953 00:34:04.953 real 0m10.776s 00:34:04.953 user 0m14.989s 00:34:04.953 sys 0m6.442s 00:34:04.953 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:04.953 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:04.953 ************************************ 00:34:04.953 END TEST nvmf_bdev_io_wait 00:34:04.953 ************************************ 00:34:05.212 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:05.212 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:05.212 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:05.212 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:05.212 ************************************ 00:34:05.212 START TEST nvmf_queue_depth 00:34:05.212 ************************************ 00:34:05.212 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:05.212 * Looking for test storage... 
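Before the log moves on to queue_depth: the nvmftestfini teardown just traced is symmetrical to the setup and worth reading as one unit. It unloads the initiator-side kernel modules (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines), kills the target by its saved pid, then removes only the firewall rules this run added: every rule was inserted with an '-m comment --comment SPDK_NVMF:...' tag, so a save/filter/restore round-trip deletes exactly those. Condensed, with $nvmfpid standing in for the pid the script tracked (1310578 here):

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                      # killprocess
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only SPDK-tagged rules
  ip -4 addr flush cvl_0_1                                # last step of the netns cleanup here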
00:34:05.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:05.212 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:05.212 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:34:05.212 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:05.212 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:05.212 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:05.212 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:05.212 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:05.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.213 --rc genhtml_branch_coverage=1 00:34:05.213 --rc genhtml_function_coverage=1 00:34:05.213 --rc genhtml_legend=1 00:34:05.213 --rc geninfo_all_blocks=1 00:34:05.213 --rc geninfo_unexecuted_blocks=1 00:34:05.213 00:34:05.213 ' 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:05.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.213 --rc genhtml_branch_coverage=1 00:34:05.213 --rc genhtml_function_coverage=1 00:34:05.213 --rc genhtml_legend=1 00:34:05.213 --rc geninfo_all_blocks=1 00:34:05.213 --rc geninfo_unexecuted_blocks=1 00:34:05.213 00:34:05.213 ' 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:05.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.213 --rc genhtml_branch_coverage=1 00:34:05.213 --rc genhtml_function_coverage=1 00:34:05.213 --rc genhtml_legend=1 00:34:05.213 --rc geninfo_all_blocks=1 00:34:05.213 --rc geninfo_unexecuted_blocks=1 00:34:05.213 00:34:05.213 ' 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:05.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.213 --rc genhtml_branch_coverage=1 00:34:05.213 --rc genhtml_function_coverage=1 00:34:05.213 --rc genhtml_legend=1 00:34:05.213 --rc geninfo_all_blocks=1 00:34:05.213 --rc 
geninfo_unexecuted_blocks=1 00:34:05.213 00:34:05.213 ' 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:05.213 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:05.214 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:05.214 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:05.214 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.214 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.214 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.472 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:05.472 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:05.472 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:34:05.472 17:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
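nvmftestinit with NET_TYPE=phy now probes for real NICs: the arrays populated just below hold the PCI device IDs of every NIC family the harness supports (Intel E810/X722 plus a list of Mellanox parts), pci_devs is narrowed to the e810 list (the [[ e810 == e810 ]] checks), and each matching function is resolved to its netdev through sysfs, producing the 'Found 0000:86:00.x' lines that follow. The same lookups by hand, using this host's first port:

  cat /sys/bus/pci/devices/0000:86:00.0/vendor   # 0x8086 (Intel)
  cat /sys/bus/pci/devices/0000:86:00.0/device   # 0x159b (E810)
  ls /sys/bus/pci/devices/0000:86:00.0/net/      # cvl_0_0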
00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:12.044 17:51:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:12.044 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:12.044 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:12.044 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:34:12.045 Found net devices under 0000:86:00.0: cvl_0_0 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:12.045 Found net devices under 0000:86:00.1: cvl_0_1 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:12.045 17:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:12.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:12.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:34:12.045 00:34:12.045 --- 10.0.0.2 ping statistics --- 00:34:12.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.045 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:12.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:12.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:34:12.045 00:34:12.045 --- 10.0.0.1 ping statistics --- 00:34:12.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.045 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1314890 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1314890 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1314890 ']' 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:12.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
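The nvmf_tcp_init sequence traced through here is what makes this a physical-NIC run rather than a loopback one: the two E810 ports found earlier are split across a network namespace, cvl_0_0 becoming the target side (10.0.0.2) inside cvl_0_0_ns_spdk while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the pair of pings above is the smoke test for that path. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                                   # root ns -> target, 0.366 ms above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator, 0.118 ms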
00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.045 [2024-10-14 17:51:10.280680] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:12.045 [2024-10-14 17:51:10.281585] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:34:12.045 [2024-10-14 17:51:10.281623] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:12.045 [2024-10-14 17:51:10.349054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:12.045 [2024-10-14 17:51:10.389081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:12.045 [2024-10-14 17:51:10.389117] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:12.045 [2024-10-14 17:51:10.389126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:12.045 [2024-10-14 17:51:10.389132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:12.045 [2024-10-14 17:51:10.389138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:12.045 [2024-10-14 17:51:10.389690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:12.045 [2024-10-14 17:51:10.456550] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:12.045 [2024-10-14 17:51:10.456773] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
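nvmfappstart then launches the target inside that namespace, and the flags are the whole story of this test variant: --interrupt-mode (hence the 'Set SPDK running in interrupt mode' notice and the per-thread 'to intr mode' lines) and a one-hot -m 0x2 that pins the lone reactor to core 1, matching 'Reactor started on core 1'. From the trace, with the workspace path shortened:

  ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2

waitforlisten then blocks until pid 1314890 is up and answering on /var/tmp/spdk.sock before any rpc_cmd is issued.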
00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:34:12.045 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.046 [2024-10-14 17:51:10.534323] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.046 Malloc0 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.046 [2024-10-14 17:51:10.606493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1315004 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1315004 /var/tmp/bdevperf.sock 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1315004 ']' 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:12.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.046 [2024-10-14 17:51:10.657552] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
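The five rpc_cmd calls above are the entire fixture for this test; condensed, with the names, sizes, and addresses exactly as logged (rpc_cmd in the harness forwards to scripts/rpc.py). bdevperf then attaches to the subsystem with the bdev_nvme_attach_controller call below.

    RPC="$SPDK/scripts/rpc.py"
    "$RPC" nvmf_create_transport -t tcp -o -u 8192                   # TCP transport
    "$RPC" bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB RAM bdev, 512 B blocks
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # expose the bdev as a namespace
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420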
00:34:12.046 [2024-10-14 17:51:10.657597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1315004 ]
00:34:12.046 [2024-10-14 17:51:10.726749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:12.046 [2024-10-14 17:51:10.769209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0
00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:34:12.046 NVMe0n1
00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:12.046 17:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:34:12.046 Running I/O for 10 seconds...
00:34:14.360 11927.00 IOPS, 46.59 MiB/s
[2024-10-14T15:51:14.434Z] 11991.50 IOPS, 46.84 MiB/s
[2024-10-14T15:51:15.371Z] 12104.67 IOPS, 47.28 MiB/s
[2024-10-14T15:51:16.319Z] 12056.50 IOPS, 47.10 MiB/s
[2024-10-14T15:51:17.261Z] 12096.20 IOPS, 47.25 MiB/s
[2024-10-14T15:51:18.212Z] 12130.83 IOPS, 47.39 MiB/s
[2024-10-14T15:51:19.148Z] 12181.43 IOPS, 47.58 MiB/s
[2024-10-14T15:51:20.085Z] 12215.25 IOPS, 47.72 MiB/s
[2024-10-14T15:51:21.463Z] 12241.11 IOPS, 47.82 MiB/s
[2024-10-14T15:51:21.463Z] 12237.80 IOPS, 47.80 MiB/s
00:34:22.325 Latency(us)
00:34:22.325 [2024-10-14T15:51:21.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:22.325 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:34:22.325 Verification LBA range: start 0x0 length 0x4000
00:34:22.325 NVMe0n1 : 10.05 12264.52 47.91 0.00 0.00 83183.51 14730.00 51929.48
00:34:22.325 [2024-10-14T15:51:21.463Z] ===================================================================================================================
00:34:22.325 [2024-10-14T15:51:21.463Z] Total : 12264.52 47.91 0.00 0.00 83183.51 14730.00 51929.48
00:34:22.325 {
00:34:22.325 "results": [
00:34:22.325 {
00:34:22.325 "job": "NVMe0n1",
00:34:22.325 "core_mask": "0x1",
00:34:22.325 "workload": "verify",
00:34:22.325 "status": "finished",
00:34:22.325 "verify_range": {
00:34:22.325 "start": 0,
00:34:22.325 "length": 16384
00:34:22.325 },
00:34:22.325 "queue_depth": 1024,
00:34:22.325 "io_size": 4096,
00:34:22.325 "runtime": 10.053229,
00:34:22.325 "iops": 12264.517201388728,
00:34:22.325 "mibps": 47.90827031792472,
00:34:22.325 "io_failed": 0,
00:34:22.325 "io_timeout": 0,
00:34:22.325 "avg_latency_us": 83183.51245856534,
00:34:22.325 "min_latency_us": 14729.996190476191,
00:34:22.325 "max_latency_us": 51929.4780952381
00:34:22.325 }
00:34:22.326 ],
00:34:22.326 "core_count": 1
00:34:22.326 }
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1315004
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1315004 ']'
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1315004
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1315004
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1315004'
killing process with pid 1315004
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1315004
00:34:22.326 Received shutdown signal, test time was about 10.000000 seconds
00:34:22.326
00:34:22.326 Latency(us)
00:34:22.326 [2024-10-14T15:51:21.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:22.326 [2024-10-14T15:51:21.464Z] ===================================================================================================================
00:34:22.326 [2024-10-14T15:51:21.464Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1315004
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
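The JSON document a few entries above is what bdevperf.py perform_tests returns. A small sketch for pulling the headline numbers out of a saved copy; jq and the result.json filename are assumptions here, not something this harness uses:

    # One summary line per job: IOPS, throughput, mean latency, tested queue depth.
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us at qd \(.queue_depth)"' result.json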
00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1314890 ']' 00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1314890 00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1314890 ']' 00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1314890 00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:22.326 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1314890 00:34:22.585 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:22.585 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:22.585 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1314890' 00:34:22.585 killing process with pid 1314890 00:34:22.585 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1314890 00:34:22.585 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1314890 00:34:22.585 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:22.585 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:22.585 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:22.585 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:34:22.585 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:22.585 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:34:22.585 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:34:22.585 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:22.585 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:22.585 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.585 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:22.585 17:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:25.121 00:34:25.121 real 0m19.601s 00:34:25.121 user 0m22.711s 00:34:25.121 sys 0m6.136s 00:34:25.121 17:51:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:25.121 ************************************ 00:34:25.121 END TEST nvmf_queue_depth 00:34:25.121 ************************************ 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:25.121 ************************************ 00:34:25.121 START TEST nvmf_target_multipath 00:34:25.121 ************************************ 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:25.121 * Looking for test storage... 00:34:25.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:34:25.121 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:25.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.122 --rc genhtml_branch_coverage=1 00:34:25.122 --rc genhtml_function_coverage=1 00:34:25.122 --rc genhtml_legend=1 00:34:25.122 --rc geninfo_all_blocks=1 00:34:25.122 --rc geninfo_unexecuted_blocks=1 00:34:25.122 00:34:25.122 ' 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:25.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.122 --rc genhtml_branch_coverage=1 00:34:25.122 --rc genhtml_function_coverage=1 00:34:25.122 --rc genhtml_legend=1 00:34:25.122 --rc geninfo_all_blocks=1 00:34:25.122 --rc geninfo_unexecuted_blocks=1 00:34:25.122 00:34:25.122 ' 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:25.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.122 --rc genhtml_branch_coverage=1 00:34:25.122 --rc genhtml_function_coverage=1 00:34:25.122 --rc genhtml_legend=1 
00:34:25.122 --rc geninfo_all_blocks=1 00:34:25.122 --rc geninfo_unexecuted_blocks=1 00:34:25.122 00:34:25.122 ' 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:25.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.122 --rc genhtml_branch_coverage=1 00:34:25.122 --rc genhtml_function_coverage=1 00:34:25.122 --rc genhtml_legend=1 00:34:25.122 --rc geninfo_all_blocks=1 00:34:25.122 --rc geninfo_unexecuted_blocks=1 00:34:25.122 00:34:25.122 ' 00:34:25.122 17:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:25.122 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:25.123 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:34:25.123 17:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:31.698 17:51:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:31.698 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:31.698 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:31.698 17:51:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:31.698 Found net devices under 0000:86:00.0: cvl_0_0 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:31.698 Found net devices under 0000:86:00.1: cvl_0_1 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:31.698 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:31.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:31.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:34:31.699 00:34:31.699 --- 10.0.0.2 ping statistics --- 00:34:31.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:31.699 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:34:31.699 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:31.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:31.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:34:31.699 00:34:31.699 --- 10.0.0.1 ping statistics --- 00:34:31.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:31.699 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:34:31.699 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:31.699 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:34:31.699 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:31.699 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:31.699 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:31.699 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:31.699 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:31.699 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:31.699 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:31.699 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:34:31.699 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:34:31.699 only one NIC for nvmf test 00:34:31.699 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:34:31.699 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:31.699 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:31.699 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:31.699 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:31.699 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:31.699 17:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:31.699 rmmod nvme_tcp 00:34:31.699 rmmod nvme_fabrics 00:34:31.699 rmmod nvme_keyring 00:34:31.699 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:31.699 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:31.699 17:51:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:31.699 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:34:31.699 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:31.699 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:31.699 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:31.699 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:31.699 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:34:31.699 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:31.699 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:34:31.699 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:31.699 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:31.699 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:31.699 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:31.699 17:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:34:33.078 17:51:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:33.078 00:34:33.078 real 0m8.324s 00:34:33.078 user 0m1.796s 00:34:33.078 sys 0m4.518s 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:33.078 ************************************ 00:34:33.078 END TEST nvmf_target_multipath 00:34:33.078 ************************************ 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:33.078 ************************************ 00:34:33.078 START TEST nvmf_zcopy 00:34:33.078 ************************************ 00:34:33.078 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:33.338 * Looking for test storage... 
00:34:33.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:33.338 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:33.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.339 --rc genhtml_branch_coverage=1 00:34:33.339 --rc genhtml_function_coverage=1 00:34:33.339 --rc genhtml_legend=1 00:34:33.339 --rc geninfo_all_blocks=1 00:34:33.339 --rc geninfo_unexecuted_blocks=1 00:34:33.339 00:34:33.339 ' 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:33.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.339 --rc genhtml_branch_coverage=1 00:34:33.339 --rc genhtml_function_coverage=1 00:34:33.339 --rc genhtml_legend=1 00:34:33.339 --rc geninfo_all_blocks=1 00:34:33.339 --rc geninfo_unexecuted_blocks=1 00:34:33.339 00:34:33.339 ' 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:33.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.339 --rc genhtml_branch_coverage=1 00:34:33.339 --rc genhtml_function_coverage=1 00:34:33.339 --rc genhtml_legend=1 00:34:33.339 --rc geninfo_all_blocks=1 00:34:33.339 --rc geninfo_unexecuted_blocks=1 00:34:33.339 00:34:33.339 ' 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:33.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.339 --rc genhtml_branch_coverage=1 00:34:33.339 --rc genhtml_function_coverage=1 00:34:33.339 --rc genhtml_legend=1 00:34:33.339 --rc geninfo_all_blocks=1 00:34:33.339 --rc geninfo_unexecuted_blocks=1 00:34:33.339 00:34:33.339 ' 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:33.339 17:51:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:34:33.339 17:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:34:39.906 17:51:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:39.906 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:39.906 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:39.906 Found net devices under 0000:86:00.0: cvl_0_0 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:39.906 Found net devices under 0000:86:00.1: cvl_0_1 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:39.906 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:39.907 17:51:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:39.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:39.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:34:39.907 00:34:39.907 --- 10.0.0.2 ping statistics --- 00:34:39.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.907 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:39.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:39.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:34:39.907 00:34:39.907 --- 10.0.0.1 ping statistics --- 00:34:39.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.907 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1323663 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1323663 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1323663 ']' 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:39.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:39.907 [2024-10-14 17:51:38.380254] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:39.907 [2024-10-14 17:51:38.381177] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:34:39.907 [2024-10-14 17:51:38.381209] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:39.907 [2024-10-14 17:51:38.452548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:39.907 [2024-10-14 17:51:38.492929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:39.907 [2024-10-14 17:51:38.492964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:39.907 [2024-10-14 17:51:38.492971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:39.907 [2024-10-14 17:51:38.492976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:39.907 [2024-10-14 17:51:38.492982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:39.907 [2024-10-14 17:51:38.493514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:39.907 [2024-10-14 17:51:38.558508] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:39.907 [2024-10-14 17:51:38.558754] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
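
The command line above shows how the harness brings the target up: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace so that its TCP listener binds on the namespaced cvl_0_0 interface (10.0.0.2), while the initiator side stays in the default namespace on cvl_0_1. A minimal standalone reproduction of this nvmfappstart/waitforlisten step, assuming the default RPC socket /var/tmp/spdk.sock (the polling loop is a simplified stand-in for the harness's waitforlisten, not its actual code):

    # Launch the target inside the namespace; flags copied from the trace:
    # -i 0 = shm id, -e 0xFFFF = tracepoint group mask, -m 0x2 = run on core 1.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # Unix RPC sockets are not network-namespaced, so rpc.py works from the host;
    # rpc_get_methods only succeeds once the app is up and listening.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.5
    done
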
00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:39.907 [2024-10-14 17:51:38.634166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:39.907 [2024-10-14 17:51:38.658387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:34:39.907 17:51:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:39.907 malloc0 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:39.907 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:39.907 { 00:34:39.907 "params": { 00:34:39.907 "name": "Nvme$subsystem", 00:34:39.907 "trtype": "$TEST_TRANSPORT", 00:34:39.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:39.908 "adrfam": "ipv4", 00:34:39.908 "trsvcid": "$NVMF_PORT", 00:34:39.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:39.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:39.908 "hdgst": ${hdgst:-false}, 00:34:39.908 "ddgst": ${ddgst:-false} 00:34:39.908 }, 00:34:39.908 "method": "bdev_nvme_attach_controller" 00:34:39.908 } 00:34:39.908 EOF 00:34:39.908 )") 00:34:39.908 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:34:39.908 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:34:39.908 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:34:39.908 17:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:39.908 "params": { 00:34:39.908 "name": "Nvme1", 00:34:39.908 "trtype": "tcp", 00:34:39.908 "traddr": "10.0.0.2", 00:34:39.908 "adrfam": "ipv4", 00:34:39.908 "trsvcid": "4420", 00:34:39.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:39.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:39.908 "hdgst": false, 00:34:39.908 "ddgst": false 00:34:39.908 }, 00:34:39.908 "method": "bdev_nvme_attach_controller" 00:34:39.908 }' 00:34:39.908 [2024-10-14 17:51:38.748998] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
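
Strung together, the rpc_cmd calls traced above are the entire target provisioning for this test. A standalone sketch, with every flag copied verbatim from the trace (rpc_cmd in the harness is effectively a wrapper around scripts/rpc.py):

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -c 0 --zcopy            # TCP transport, zero-copy enabled
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                          # allow any host, up to 10 namespaces
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_malloc_create 32 4096 -b malloc0                   # 32 MiB RAM-backed bdev, 4 KiB blocks
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
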
00:34:39.908 [2024-10-14 17:51:38.749038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1323798 ] 00:34:39.908 [2024-10-14 17:51:38.816961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:39.908 [2024-10-14 17:51:38.857678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:40.166 Running I/O for 10 seconds... 00:34:42.474 8426.00 IOPS, 65.83 MiB/s [2024-10-14T15:51:42.547Z] 8509.50 IOPS, 66.48 MiB/s [2024-10-14T15:51:43.482Z] 8550.00 IOPS, 66.80 MiB/s [2024-10-14T15:51:44.418Z] 8580.75 IOPS, 67.04 MiB/s [2024-10-14T15:51:45.353Z] 8588.60 IOPS, 67.10 MiB/s [2024-10-14T15:51:46.288Z] 8603.00 IOPS, 67.21 MiB/s [2024-10-14T15:51:47.223Z] 8603.86 IOPS, 67.22 MiB/s [2024-10-14T15:51:48.597Z] 8612.25 IOPS, 67.28 MiB/s [2024-10-14T15:51:49.533Z] 8619.89 IOPS, 67.34 MiB/s [2024-10-14T15:51:49.533Z] 8619.40 IOPS, 67.34 MiB/s 00:34:50.395 Latency(us) 00:34:50.395 [2024-10-14T15:51:49.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:50.395 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:34:50.395 Verification LBA range: start 0x0 length 0x1000 00:34:50.395 Nvme1n1 : 10.01 8623.41 67.37 0.00 0.00 14801.79 2605.84 21096.35 00:34:50.395 [2024-10-14T15:51:49.533Z] =================================================================================================================== 00:34:50.395 [2024-10-14T15:51:49.533Z] Total : 8623.41 67.37 0.00 0.00 14801.79 2605.84 21096.35 00:34:50.395 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1325407 00:34:50.395 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:34:50.395 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:50.395 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:34:50.395 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:34:50.395 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:34:50.395 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:34:50.395 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:50.395 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:50.395 { 00:34:50.395 "params": { 00:34:50.395 "name": "Nvme$subsystem", 00:34:50.395 "trtype": "$TEST_TRANSPORT", 00:34:50.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.395 "adrfam": "ipv4", 00:34:50.395 "trsvcid": "$NVMF_PORT", 00:34:50.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.395 "hdgst": ${hdgst:-false}, 00:34:50.395 "ddgst": ${ddgst:-false} 00:34:50.395 }, 00:34:50.395 "method": "bdev_nvme_attach_controller" 00:34:50.395 } 00:34:50.395 EOF 00:34:50.395 )") 00:34:50.395 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:34:50.395 
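
Both bdevperf invocations (the 10 s verify job whose results appear above and the 5 s randrw job being configured here) receive their bdev setup the same way: gen_nvmf_target_json renders the bdev_nvme_attach_controller fragment seen in the heredocs, and the shell delivers it over an anonymous pipe, which is why the command lines read --json /dev/fd/62 and /dev/fd/63. A rough standalone equivalent; note the outer "subsystems"/"bdev" wrapper is inferred from the usual SPDK JSON config layout and is not shown verbatim in this log:

    cfg='{"subsystems":[{"subsystem":"bdev","config":[{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false }}]}]}'
    # Process substitution hands the config to bdevperf as /dev/fd/N; no temp file needed.
    ./build/examples/bdevperf --json <(printf '%s' "$cfg") -t 5 -q 128 -w randrw -M 50 -o 8192
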
[2024-10-14 17:51:49.369849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.395 [2024-10-14 17:51:49.369879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.395 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:34:50.395 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:34:50.395 17:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:50.395 "params": { 00:34:50.395 "name": "Nvme1", 00:34:50.395 "trtype": "tcp", 00:34:50.395 "traddr": "10.0.0.2", 00:34:50.395 "adrfam": "ipv4", 00:34:50.395 "trsvcid": "4420", 00:34:50.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:50.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:50.396 "hdgst": false, 00:34:50.396 "ddgst": false 00:34:50.396 }, 00:34:50.396 "method": "bdev_nvme_attach_controller" 00:34:50.396 }' 00:34:50.396 [2024-10-14 17:51:49.381813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.396 [2024-10-14 17:51:49.381825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.396 [2024-10-14 17:51:49.393809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.396 [2024-10-14 17:51:49.393817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.396 [2024-10-14 17:51:49.405808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.396 [2024-10-14 17:51:49.405817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.396 [2024-10-14 17:51:49.409789] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:34:50.396 [2024-10-14 17:51:49.409841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1325407 ] 00:34:50.396 [2024-10-14 17:51:49.417809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.396 [2024-10-14 17:51:49.417821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.396 [2024-10-14 17:51:49.429807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.396 [2024-10-14 17:51:49.429816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.396 [2024-10-14 17:51:49.441811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.396 [2024-10-14 17:51:49.441820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.396 [2024-10-14 17:51:49.453810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.396 [2024-10-14 17:51:49.453818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.396 [2024-10-14 17:51:49.465809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.396 [2024-10-14 17:51:49.465818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.396 [2024-10-14 17:51:49.476498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.396 [2024-10-14 17:51:49.477809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.396 [2024-10-14 17:51:49.477818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.396 [2024-10-14 17:51:49.489808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.396 [2024-10-14 17:51:49.489821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.396 [2024-10-14 17:51:49.501809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.396 [2024-10-14 17:51:49.501819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.396 [2024-10-14 17:51:49.513809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.396 [2024-10-14 17:51:49.513818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.396 [2024-10-14 17:51:49.516748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:50.396 [2024-10-14 17:51:49.525809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.396 [2024-10-14 17:51:49.525820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.654 [2024-10-14 17:51:49.537828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.654 [2024-10-14 17:51:49.537853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.654 [2024-10-14 17:51:49.549814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.654 [2024-10-14 17:51:49.549827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.654 [2024-10-14 17:51:49.561810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:34:50.654 [2024-10-14 17:51:49.561822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.654 [2024-10-14 17:51:49.573811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.654 [2024-10-14 17:51:49.573822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.654 [2024-10-14 17:51:49.585808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.654 [2024-10-14 17:51:49.585818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.654 [2024-10-14 17:51:49.597818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.654 [2024-10-14 17:51:49.597835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.654 [2024-10-14 17:51:49.609814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.654 [2024-10-14 17:51:49.609829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.654 [2024-10-14 17:51:49.621817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.654 [2024-10-14 17:51:49.621830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.654 [2024-10-14 17:51:49.633817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.654 [2024-10-14 17:51:49.633829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.654 [2024-10-14 17:51:49.645820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.654 [2024-10-14 17:51:49.645829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.654 [2024-10-14 17:51:49.657809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.654 [2024-10-14 17:51:49.657817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.655 [2024-10-14 17:51:49.669812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.655 [2024-10-14 17:51:49.669824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.655 [2024-10-14 17:51:49.681815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.655 [2024-10-14 17:51:49.681829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.655 [2024-10-14 17:51:49.693810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.655 [2024-10-14 17:51:49.693820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.655 [2024-10-14 17:51:49.705810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.655 [2024-10-14 17:51:49.705820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.655 [2024-10-14 17:51:49.717811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.655 [2024-10-14 17:51:49.717821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.655 [2024-10-14 17:51:49.729812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.655 [2024-10-14 17:51:49.729827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.655 [2024-10-14 
17:51:49.741821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.655 [2024-10-14 17:51:49.741831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.655 [2024-10-14 17:51:49.753809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.655 [2024-10-14 17:51:49.753818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.655 [2024-10-14 17:51:49.765813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.655 [2024-10-14 17:51:49.765825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.655 [2024-10-14 17:51:49.777810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.655 [2024-10-14 17:51:49.777820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.655 [2024-10-14 17:51:49.789807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.655 [2024-10-14 17:51:49.789817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.913 [2024-10-14 17:51:49.801809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.914 [2024-10-14 17:51:49.801819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.914 [2024-10-14 17:51:49.813815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.914 [2024-10-14 17:51:49.813829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.914 [2024-10-14 17:51:49.825812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.914 [2024-10-14 17:51:49.825827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.914 Running I/O for 5 seconds... 
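
The paired subsystem.c/nvmf_rpc.c errors interleaved through this whole stretch are expected noise, not a failure: alongside the second bdevperf job, the zcopy test keeps re-issuing nvmf_subsystem_add_ns against a NSID that is already live, apparently to exercise namespace hot-add handling while zero-copy I/O is active. The driving loop is plausibly of this shape (an illustration only, not a verbatim copy of zcopy.sh):

    # Hammer the in-use namespace for as long as bdevperf ($perfpid) is running;
    # each attempt is rejected with "Requested NSID 1 already in use".
    while kill -0 "$perfpid" 2> /dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
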
00:34:51.950 16602.00 IOPS, 129.70 MiB/s [2024-10-14T15:51:51.088Z]
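The interleaved throughput samples allow a quick consistency check: 129.70 MiB/s at 16602.00 IOPS works out to roughly 8 KiB per I/O, suggesting an 8 KiB block size for the workload (an inference from the ratio, not stated anywhere in the log):

  # KiB per I/O = (MiB/s * 1024) / IOPS
  echo "scale=2; 129.70 * 1024 / 16602" | bc    # => 8.00
  # and back again: 16602 IOPS * 8 KiB = 129.70 MiB/s
  echo "scale=2; 16602 * 8 / 1024" | bc         # => 129.70

The later samples below (16727.00, 16777.67, 16779.50 IOPS) satisfy the same 8 KiB ratio, so the I/O size stays constant across the run.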
00:34:52.727 16727.00 IOPS, 130.68 MiB/s [2024-10-14T15:51:51.865Z]
00:34:53.764 16777.67 IOPS, 131.08 MiB/s [2024-10-14T15:51:52.902Z]
00:34:54.887 16779.50 IOPS, 131.09 MiB/s [2024-10-14T15:51:54.025Z]
00:34:54.887 [2024-10-14
17:51:53.917707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:54.887 [2024-10-14 17:51:53.917725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:54.887 [2024-10-14 17:51:53.930514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:54.887 [2024-10-14 17:51:53.930531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:54.887 [2024-10-14 17:51:53.943076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:54.887 [2024-10-14 17:51:53.943094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:54.887 [2024-10-14 17:51:53.953892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:54.887 [2024-10-14 17:51:53.953909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:54.887 [2024-10-14 17:51:53.967869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:54.887 [2024-10-14 17:51:53.967887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:54.887 [2024-10-14 17:51:53.982522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:54.887 [2024-10-14 17:51:53.982539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:54.887 [2024-10-14 17:51:53.997133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:54.887 [2024-10-14 17:51:53.997151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:54.887 [2024-10-14 17:51:54.011370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:54.887 [2024-10-14 17:51:54.011390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.026166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.026185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.037374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.037393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.052126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.052145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.066626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.066644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.081538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.081555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.092546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.092563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.107088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.107105] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.121822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.121840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.132947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.132964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.147707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.147725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.162338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.162355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.177405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.177422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.191794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.191812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.206656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.206673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.221467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.221485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.235285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.235302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.250111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.250128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.261433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.261451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.275969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.275988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.290570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.290587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.183 [2024-10-14 17:51:54.305161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.183 [2024-10-14 17:51:54.305178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.442 [2024-10-14 17:51:54.318856] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.442 [2024-10-14 17:51:54.318875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.442 [2024-10-14 17:51:54.334235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.442 [2024-10-14 17:51:54.334252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.442 [2024-10-14 17:51:54.350204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.442 [2024-10-14 17:51:54.350222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.442 [2024-10-14 17:51:54.366137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.442 [2024-10-14 17:51:54.366154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.442 [2024-10-14 17:51:54.379114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.442 [2024-10-14 17:51:54.379132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.442 [2024-10-14 17:51:54.394207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.442 [2024-10-14 17:51:54.394228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.442 [2024-10-14 17:51:54.405784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.442 [2024-10-14 17:51:54.405800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.442 [2024-10-14 17:51:54.419481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.442 [2024-10-14 17:51:54.419498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.442 [2024-10-14 17:51:54.434416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.442 [2024-10-14 17:51:54.434434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.442 [2024-10-14 17:51:54.450182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.442 [2024-10-14 17:51:54.450199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.442 [2024-10-14 17:51:54.462107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.442 [2024-10-14 17:51:54.462124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.442 [2024-10-14 17:51:54.475591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.442 [2024-10-14 17:51:54.475615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.442 [2024-10-14 17:51:54.490451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.442 [2024-10-14 17:51:54.490468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.442 [2024-10-14 17:51:54.505914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.442 [2024-10-14 17:51:54.505931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.442 [2024-10-14 17:51:54.518334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.442 [2024-10-14 17:51:54.518351] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.442 [2024-10-14 17:51:54.531014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.442 [2024-10-14 17:51:54.531031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.442 [2024-10-14 17:51:54.545461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.443 [2024-10-14 17:51:54.545478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.443 [2024-10-14 17:51:54.556743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.443 [2024-10-14 17:51:54.556760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.443 [2024-10-14 17:51:54.571635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.443 [2024-10-14 17:51:54.571652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.701 [2024-10-14 17:51:54.585969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.701 [2024-10-14 17:51:54.585988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.701 [2024-10-14 17:51:54.597853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.701 [2024-10-14 17:51:54.597870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.701 [2024-10-14 17:51:54.611727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.701 [2024-10-14 17:51:54.611743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.701 [2024-10-14 17:51:54.626533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.701 [2024-10-14 17:51:54.626550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.701 [2024-10-14 17:51:54.641358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.701 [2024-10-14 17:51:54.641376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.701 [2024-10-14 17:51:54.655772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.701 [2024-10-14 17:51:54.655789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.701 [2024-10-14 17:51:54.670171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.701 [2024-10-14 17:51:54.670187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.701 [2024-10-14 17:51:54.682252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.701 [2024-10-14 17:51:54.682269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.701 [2024-10-14 17:51:54.695959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.701 [2024-10-14 17:51:54.695976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.701 [2024-10-14 17:51:54.710269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.701 [2024-10-14 17:51:54.710285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.701 [2024-10-14 17:51:54.725584] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.701 [2024-10-14 17:51:54.725607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.701 [2024-10-14 17:51:54.739809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.701 [2024-10-14 17:51:54.739826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.701 [2024-10-14 17:51:54.754268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.701 [2024-10-14 17:51:54.754285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.701 [2024-10-14 17:51:54.769768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.701 [2024-10-14 17:51:54.769785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.701 [2024-10-14 17:51:54.782411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.701 [2024-10-14 17:51:54.782432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.701 [2024-10-14 17:51:54.795763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.701 [2024-10-14 17:51:54.795780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.701 [2024-10-14 17:51:54.810336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.701 [2024-10-14 17:51:54.810352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.701 [2024-10-14 17:51:54.825333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.701 [2024-10-14 17:51:54.825351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.701 [2024-10-14 17:51:54.839504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.701 [2024-10-14 17:51:54.839522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.960 16781.00 IOPS, 131.10 MiB/s 00:34:55.960 Latency(us) 00:34:55.960 [2024-10-14T15:51:55.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.960 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:34:55.960 Nvme1n1 : 5.00 16791.31 131.18 0.00 0.00 7617.41 1997.29 13232.03 00:34:55.960 [2024-10-14T15:51:55.098Z] =================================================================================================================== 00:34:55.960 [2024-10-14T15:51:55.098Z] Total : 16791.31 131.18 0.00 0.00 7617.41 1997.29 13232.03 00:34:55.960 [2024-10-14 17:51:54.849817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.960 [2024-10-14 17:51:54.849833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.961 [2024-10-14 17:51:54.861816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.961 [2024-10-14 17:51:54.861831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.961 [2024-10-14 17:51:54.873822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.961 [2024-10-14 17:51:54.873839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.961 [2024-10-14 17:51:54.885818] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.961 [2024-10-14 17:51:54.885839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.961 [2024-10-14 17:51:54.897818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.961 [2024-10-14 17:51:54.897831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.961 [2024-10-14 17:51:54.909811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.961 [2024-10-14 17:51:54.909823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.961 [2024-10-14 17:51:54.921811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.961 [2024-10-14 17:51:54.921825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.961 [2024-10-14 17:51:54.933813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.961 [2024-10-14 17:51:54.933827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.961 [2024-10-14 17:51:54.945812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.961 [2024-10-14 17:51:54.945826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.961 [2024-10-14 17:51:54.957809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.961 [2024-10-14 17:51:54.957819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.961 [2024-10-14 17:51:54.969808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.961 [2024-10-14 17:51:54.969817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.961 [2024-10-14 17:51:54.981813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.961 [2024-10-14 17:51:54.981833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.961 [2024-10-14 17:51:54.993814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.961 [2024-10-14 17:51:54.993825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.961 [2024-10-14 17:51:55.005809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.961 [2024-10-14 17:51:55.005818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1325407) - No such process 00:34:55.961 17:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1325407 00:34:55.961 17:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:55.961 17:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.961 17:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:55.961 17:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.961 17:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d 
delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:55.961 17:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.961 17:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:55.961 delay0 00:34:55.961 17:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.961 17:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:34:55.961 17:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.961 17:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:55.961 17:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.961 17:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:34:56.220 [2024-10-14 17:51:55.141300] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:35:04.371 [2024-10-14 17:52:02.259157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x58da40 is same with the state(6) to be set 00:35:04.371 [2024-10-14 17:52:02.259195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x58da40 is same with the state(6) to be set 00:35:04.371 Initializing NVMe Controllers 00:35:04.371 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:04.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:04.371 Initialization complete. Launching workers. 
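The error loop above is the zcopy test deliberately re-issuing nvmf_subsystem_add_ns for an NSID that is already in use while the subsystem is paused; the script then drops the namespace, wraps the base bdev in a delay bdev, and floods the slow namespace with aborts. A minimal sketch of that sequence, reconstructed from the rpc_cmd traces in this log -- the $SPDK path, NQN, bdev names, and flag values are taken from the log itself, while the bare scripts/rpc.py invocation form (rpc_cmd is a thin wrapper around it) is assumed from a stock SPDK checkout:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    # The loop above retried an add like this against the in-use NSID;
    # it fails with the error pair logged repeatedly (bdev name here is
    # illustrative, the log does not show which bdev the loop passed).
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true

    # Drop NSID 1, wrap malloc0 in a delay bdev (1000000 us average and
    # p99 read/write latency), and re-expose the delay bdev as NSID 1.
    "$RPC" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    "$RPC" bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

    # Slow I/O gives the abort example something to cancel; the submit/abort
    # statistics from this run follow below.
    "$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'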
00:35:04.371 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 264, failed: 21378 00:35:04.371 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 21545, failed to submit 97 00:35:04.371 success 21471, unsuccessful 74, failed 0 00:35:04.371 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:35:04.371 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:35:04.371 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:04.371 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:35:04.371 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:04.371 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:35:04.371 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:04.371 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:04.371 rmmod nvme_tcp 00:35:04.371 rmmod nvme_fabrics 00:35:04.371 rmmod nvme_keyring 00:35:04.371 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:04.371 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:35:04.371 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:35:04.371 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1323663 ']' 00:35:04.371 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1323663 00:35:04.371 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1323663 ']' 00:35:04.371 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1323663 00:35:04.371 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:35:04.371 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:04.372 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1323663 00:35:04.372 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:04.372 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:04.372 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1323663' 00:35:04.372 killing process with pid 1323663 00:35:04.372 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1323663 00:35:04.372 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1323663 00:35:04.372 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:04.372 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:04.372 17:52:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:04.372 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:35:04.372 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:35:04.372 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:04.372 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:35:04.372 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:04.372 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:04.372 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.372 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:04.372 17:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:05.750 00:35:05.750 real 0m32.411s 00:35:05.750 user 0m41.824s 00:35:05.750 sys 0m13.113s 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:05.750 ************************************ 00:35:05.750 END TEST nvmf_zcopy 00:35:05.750 ************************************ 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:05.750 ************************************ 00:35:05.750 START TEST nvmf_nmic 00:35:05.750 ************************************ 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:05.750 * Looking for test storage... 
00:35:05.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:35:05.750 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:05.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.751 --rc genhtml_branch_coverage=1 00:35:05.751 --rc genhtml_function_coverage=1 00:35:05.751 --rc genhtml_legend=1 00:35:05.751 --rc geninfo_all_blocks=1 00:35:05.751 --rc geninfo_unexecuted_blocks=1 00:35:05.751 00:35:05.751 ' 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:05.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.751 --rc genhtml_branch_coverage=1 00:35:05.751 --rc genhtml_function_coverage=1 00:35:05.751 --rc genhtml_legend=1 00:35:05.751 --rc geninfo_all_blocks=1 00:35:05.751 --rc geninfo_unexecuted_blocks=1 00:35:05.751 00:35:05.751 ' 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:05.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.751 --rc genhtml_branch_coverage=1 00:35:05.751 --rc genhtml_function_coverage=1 00:35:05.751 --rc genhtml_legend=1 00:35:05.751 --rc geninfo_all_blocks=1 00:35:05.751 --rc geninfo_unexecuted_blocks=1 00:35:05.751 00:35:05.751 ' 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:05.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.751 --rc genhtml_branch_coverage=1 00:35:05.751 --rc genhtml_function_coverage=1 00:35:05.751 --rc genhtml_legend=1 00:35:05.751 --rc geninfo_all_blocks=1 00:35:05.751 --rc geninfo_unexecuted_blocks=1 00:35:05.751 00:35:05.751 ' 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:05.751 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain prefixes repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.010
[... paths/export.sh@3-@6 prepend the go, golangci, and protoc directories again, export PATH, and echo the same value; the duplicated PATH dumps are elided ...]
17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.010 17:52:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.010 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:06.011 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:06.011 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:35:06.011 17:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:12.582 17:52:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:12.582 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:12.582 17:52:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:12.582 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:12.582 Found net devices under 0000:86:00.0: cvl_0_0 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:12.582 
17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:12.582 Found net devices under 0000:86:00.1: cvl_0_1 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
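The sequence above is nvmf_tcp_init building its loopback topology out of the two discovered E810 ports: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) to act as the target side, while cvl_0_1 stays in the root namespace as the initiator. Condensed into a standalone sketch using the interface names and addresses this run discovered (the link-up, iptables and ping steps appear verbatim in the entries that follow):

  ip netns add cvl_0_0_ns_spdk                           # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                     # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator sanity check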
00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:12.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:12.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:35:12.582 00:35:12.582 --- 10.0.0.2 ping statistics --- 00:35:12.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:12.582 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:12.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:12.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:35:12.582 00:35:12.582 --- 10.0.0.1 ping statistics --- 00:35:12.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:12.582 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1330989 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1330989 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1330989 ']' 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:12.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:12.582 17:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:12.582 [2024-10-14 17:52:10.828704] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:12.582 [2024-10-14 17:52:10.829689] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:35:12.582 [2024-10-14 17:52:10.829726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:12.582 [2024-10-14 17:52:10.904225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:12.582 [2024-10-14 17:52:10.947723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:12.582 [2024-10-14 17:52:10.947759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:12.582 [2024-10-14 17:52:10.947766] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:12.582 [2024-10-14 17:52:10.947772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:12.582 [2024-10-14 17:52:10.947778] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:12.582 [2024-10-14 17:52:10.949382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.582 [2024-10-14 17:52:10.949490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:12.582 [2024-10-14 17:52:10.949599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:12.582 [2024-10-14 17:52:10.949613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:12.582 [2024-10-14 17:52:11.017090] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:12.582 [2024-10-14 17:52:11.018368] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:12.582 [2024-10-14 17:52:11.018533] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
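At this point the harness has launched nvmf_tgt inside the target namespace with a four-core mask and interrupt mode enabled, and waitforlisten blocks until the app answers on /var/tmp/spdk.sock; the reactor and poll-group notices around this point confirm the interrupt-mode bring-up. A minimal sketch of that launch, where the polling loop is a simplified stand-in for waitforlisten rather than a copy of it:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # Poll the JSON-RPC socket until the target responds (simplified waitforlisten).
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done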
00:35:12.582 [2024-10-14 17:52:11.019009] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:12.582 [2024-10-14 17:52:11.019053] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:12.582 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:12.582 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:35:12.582 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:12.582 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:12.582 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:12.582 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:12.582 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:12.582 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.582 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:12.582 [2024-10-14 17:52:11.086352] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:12.582 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.582 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:12.582 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.582 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:12.582 Malloc0 00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
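Stripped of the xtrace noise, the provisioning the log performs through rpc_cmd (a thin wrapper around scripts/rpc.py) boils down to five RPCs; the listener notice lands in the entries below, after which test case1 deliberately tries to claim Malloc0 from a second subsystem and expects the JSON-RPC failure shown there. A condensed sketch with the exact flags from this run (the comments are editorial glosses, not part of the test):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport with the test's option flags
  $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB ramdisk, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420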
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:35:12.583 [2024-10-14 17:52:11.170542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:35:12.583 test case1: single bdev can't be used in multiple subsystems
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:35:12.583 [2024-10-14 17:52:11.206036] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:35:12.583 [2024-10-14 17:52:11.206056] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:35:12.583 [2024-10-14 17:52:11.206067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:12.583 request:
00:35:12.583 {
00:35:12.583 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:35:12.583 "namespace": {
00:35:12.583 "bdev_name": "Malloc0",
00:35:12.583 "no_auto_visible": false
00:35:12.583 },
00:35:12.583 "method": "nvmf_subsystem_add_ns",
00:35:12.583 "req_id": 1
00:35:12.583 }
00:35:12.583 Got JSON-RPC error response
00:35:12.583 response:
00:35:12.583 {
00:35:12.583 "code": -32602,
00:35:12.583 "message": "Invalid parameters"
00:35:12.583 }
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:35:12.583 Adding namespace failed - expected result.
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:35:12.583 test case2: host connect to nvmf target in multiple paths
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:35:12.583 [2024-10-14 17:52:11.218133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:35:12.583 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:35:12.842 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:35:12.842 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0
00:35:12.842 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:35:12.842 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:35:12.842 17:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2
00:35:14.745 17:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:35:14.745 17:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:35:14.745 17:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:35:14.745 17:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:35:14.745 17:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:35:14.745 17:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0
00:35:14.745 17:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:35:14.745 [global]
00:35:14.745 thread=1
00:35:14.745 invalidate=1
00:35:14.745 rw=write
00:35:14.745 time_based=1
00:35:14.745 runtime=1
00:35:14.745 ioengine=libaio
00:35:14.745 direct=1
00:35:14.745 bs=4096
00:35:14.745 iodepth=1
00:35:14.745 norandommap=0
00:35:14.745 numjobs=1
00:35:14.745
00:35:14.745 verify_dump=1
00:35:14.745 verify_backlog=512
00:35:14.745 verify_state_save=0
00:35:14.745 do_verify=1
00:35:14.745 verify=crc32c-intel
00:35:14.745 [job0]
00:35:14.745 filename=/dev/nvme0n1
00:35:14.745 Could not set queue depth (nvme0n1)
00:35:15.003 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:35:15.003 fio-3.35
00:35:15.003 Starting 1 thread
00:35:16.379
00:35:16.379 job0: (groupid=0, jobs=1): err= 0: pid=1331606: Mon Oct 14 17:52:15 2024
00:35:16.379 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec)
00:35:16.379 slat (nsec): min=7002, max=41669, avg=8100.03, stdev=1709.19
00:35:16.379 clat (usec): min=171, max=526, avg=205.40, stdev=17.16
00:35:16.379 lat (usec): min=191, max=558, avg=213.50, stdev=17.44
00:35:16.379 clat percentiles (usec):
00:35:16.379 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 196],
00:35:16.379 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 206],
00:35:16.379 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 217], 95.00th=[ 223],
00:35:16.379 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 392], 99.95th=[ 400],
00:35:16.379 | 99.99th=[ 529]
00:35:16.379 write: IOPS=2838, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec); 0 zone resets
00:35:16.379 slat (nsec): min=9766, max=44517, avg=10924.82, stdev=1788.72
00:35:16.379 clat (usec): min=123, max=3896, avg=143.20, stdev=74.77
00:35:16.379 lat (usec): min=135, max=3908, avg=154.13, stdev=74.89
00:35:16.379 clat percentiles (usec):
00:35:16.379 | 1.00th=[ 129], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 133],
00:35:16.379 | 30.00th=[ 135], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 137],
00:35:16.379 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 145], 95.00th=[ 233],
00:35:16.379 | 99.00th=[ 245], 99.50th=[ 251], 99.90th=[ 371], 99.95th=[ 424],
00:35:16.379 | 99.99th=[ 3884]
00:35:16.379 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1
00:35:16.379 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:35:16.379 lat (usec) : 250=98.78%, 500=1.18%, 750=0.02%
00:35:16.379 lat (msec) : 4=0.02%
00:35:16.379 cpu : usr=4.60%, sys=8.20%, ctx=5401, majf=0, minf=1
00:35:16.379 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:16.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:16.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:16.379 issued rwts: total=2560,2841,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:16.379 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:16.379
00:35:16.379 Run status group 0 (all jobs):
00:35:16.379 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec
00:35:16.379 WRITE: bw=11.1MiB/s (11.6MB/s), 11.1MiB/s-11.1MiB/s (11.6MB/s-11.6MB/s), io=11.1MiB (11.6MB), run=1001-1001msec
00:35:16.379
00:35:16.379 Disk stats (read/write):
00:35:16.379 nvme0n1: ios=2344/2560, merge=0/0, ticks=671/342, in_queue=1013, util=95.39%
00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:35:16.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:35:16.379 17:52:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:16.379 rmmod nvme_tcp 00:35:16.379 rmmod nvme_fabrics 00:35:16.379 rmmod nvme_keyring 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1330989 ']' 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1330989 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1330989 ']' 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1330989 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:16.379 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1330989 00:35:16.638 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:16.638 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:16.638 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 1330989' 00:35:16.638 killing process with pid 1330989 00:35:16.638 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1330989 00:35:16.638 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1330989 00:35:16.638 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:16.638 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:16.638 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:16.638 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:35:16.638 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:35:16.638 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:16.638 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:35:16.638 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:16.638 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:16.638 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:16.638 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:16.638 17:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.172 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:19.172 00:35:19.172 real 0m13.122s 00:35:19.172 user 0m24.404s 00:35:19.172 sys 0m6.162s 00:35:19.172 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:19.172 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:19.172 ************************************ 00:35:19.172 END TEST nvmf_nmic 00:35:19.172 ************************************ 00:35:19.172 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:19.172 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:19.172 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:19.172 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:19.172 ************************************ 00:35:19.172 START TEST nvmf_fio_target 00:35:19.172 ************************************ 00:35:19.172 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:19.172 * Looking for test storage... 
00:35:19.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:19.172 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:19.172 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:35:19.173 17:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:19.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.173 --rc genhtml_branch_coverage=1 00:35:19.173 --rc genhtml_function_coverage=1 00:35:19.173 --rc genhtml_legend=1 00:35:19.173 --rc geninfo_all_blocks=1 00:35:19.173 --rc geninfo_unexecuted_blocks=1 00:35:19.173 00:35:19.173 ' 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:19.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.173 --rc genhtml_branch_coverage=1 00:35:19.173 --rc genhtml_function_coverage=1 00:35:19.173 --rc genhtml_legend=1 00:35:19.173 --rc geninfo_all_blocks=1 00:35:19.173 --rc geninfo_unexecuted_blocks=1 00:35:19.173 00:35:19.173 ' 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:19.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.173 --rc genhtml_branch_coverage=1 00:35:19.173 --rc genhtml_function_coverage=1 00:35:19.173 --rc genhtml_legend=1 00:35:19.173 --rc geninfo_all_blocks=1 00:35:19.173 --rc geninfo_unexecuted_blocks=1 00:35:19.173 00:35:19.173 ' 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:19.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.173 --rc genhtml_branch_coverage=1 00:35:19.173 --rc genhtml_function_coverage=1 00:35:19.173 --rc genhtml_legend=1 00:35:19.173 --rc geninfo_all_blocks=1 00:35:19.173 --rc geninfo_unexecuted_blocks=1 00:35:19.173 
00:35:19.173 ' 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:19.173 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:19.174 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:19.174 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:35:19.174 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:19.174 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:19.174 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:19.174 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:19.174 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:19.174 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.174 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.174 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.174 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:19.174 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:19.174 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:19.174 17:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:25.746 17:52:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:25.746 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:25.747 17:52:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:25.747 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:25.747 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:25.747 Found net 
devices under 0000:86:00.0: cvl_0_0 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:25.747 Found net devices under 0000:86:00.1: cvl_0_1 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:25.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:25.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:35:25.747 00:35:25.747 --- 10.0.0.2 ping statistics --- 00:35:25.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.747 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:25.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:25.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:35:25.747 00:35:25.747 --- 10.0.0.1 ping statistics --- 00:35:25.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.747 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:25.747 17:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:25.747 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:35:25.747 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:25.747 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:25.747 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:25.747 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1335355 00:35:25.747 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1335355 00:35:25.747 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:25.747 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1335355 ']' 00:35:25.747 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:25.747 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:25.747 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:25.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
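
The nvmf_tcp_init sequence above stitches the two physical E810 ports into a loopback topology: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1); an iptables ACCEPT rule opens TCP port 4420 and a ping in each direction proves reachability before nvmf_tgt is launched inside the namespace. Condensed into a standalone sketch (interface and namespace names are host-specific, taken from this trace; the harness additionally tags the iptables rule with an SPDK_NVMF comment so it can be cleaned up later):

  # rebuild the two-namespace NVMe/TCP test topology from the trace above
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
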
00:35:25.747 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:25.747 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:25.747 [2024-10-14 17:52:24.060256] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:25.748 [2024-10-14 17:52:24.061193] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:35:25.748 [2024-10-14 17:52:24.061228] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:25.748 [2024-10-14 17:52:24.134022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:25.748 [2024-10-14 17:52:24.176581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:25.748 [2024-10-14 17:52:24.176622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:25.748 [2024-10-14 17:52:24.176629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:25.748 [2024-10-14 17:52:24.176635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:25.748 [2024-10-14 17:52:24.176641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:25.748 [2024-10-14 17:52:24.178070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:25.748 [2024-10-14 17:52:24.178178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:25.748 [2024-10-14 17:52:24.178268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:25.748 [2024-10-14 17:52:24.178269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.748 [2024-10-14 17:52:24.246046] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:25.748 [2024-10-14 17:52:24.247346] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:25.748 [2024-10-14 17:52:24.248039] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:25.748 [2024-10-14 17:52:24.248196] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:25.748 [2024-10-14 17:52:24.248245] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
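
With nvmf_tgt now running in interrupt mode inside the namespace, fio.sh provisions the test subsystem over the UNIX-domain RPC socket (which is shared with the default namespace, so rpc.py needs no ip netns exec). The RPC sequence traced over the next several steps, condensed into a sketch (rpc.py path shortened, flags replayed as-is from the trace; nvme connect's --hostnqn/--hostid arguments omitted for brevity):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for _ in 1 2 3 4 5 6 7; do $rpc bdev_malloc_create 64 512; done   # Malloc0..Malloc6
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'            # raid0 over 2 mallocs
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'  # concat over 3
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420  # -> /dev/nvme0n1..n4

The four namespaces (Malloc0, Malloc1, raid0, concat0) are what show up as /dev/nvme0n1 through /dev/nvme0n4 once waitforserial counts four devices with serial SPDKISFASTANDAWESOME.
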
00:35:25.748 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:25.748 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:35:25.748 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:25.748 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:25.748 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:25.748 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:25.748 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:25.748 [2024-10-14 17:52:24.483036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:25.748 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:25.748 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:35:25.748 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:26.007 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:35:26.007 17:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:26.265 17:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:35:26.266 17:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:26.266 17:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:35:26.266 17:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:35:26.525 17:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:26.784 17:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:35:26.784 17:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:27.044 17:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:35:27.044 17:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:27.044 17:52:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:35:27.044 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:35:27.303 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:27.562 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:27.562 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:27.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:27.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:27.821 17:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:28.080 [2024-10-14 17:52:27.094950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:28.080 17:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:35:28.339 17:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:35:28.598 17:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:28.598 17:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:35:28.598 17:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:35:28.598 17:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:35:28.598 17:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:35:28.598 17:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:35:28.598 17:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:35:31.165 17:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:35:31.165 17:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:35:31.165 17:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:35:31.165 17:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:35:31.165 17:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:35:31.165 17:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:35:31.165 17:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:31.165 [global] 00:35:31.165 thread=1 00:35:31.165 invalidate=1 00:35:31.165 rw=write 00:35:31.165 time_based=1 00:35:31.165 runtime=1 00:35:31.165 ioengine=libaio 00:35:31.165 direct=1 00:35:31.165 bs=4096 00:35:31.165 iodepth=1 00:35:31.165 norandommap=0 00:35:31.165 numjobs=1 00:35:31.165 00:35:31.165 verify_dump=1 00:35:31.165 verify_backlog=512 00:35:31.165 verify_state_save=0 00:35:31.165 do_verify=1 00:35:31.165 verify=crc32c-intel 00:35:31.165 [job0] 00:35:31.165 filename=/dev/nvme0n1 00:35:31.165 [job1] 00:35:31.165 filename=/dev/nvme0n2 00:35:31.165 [job2] 00:35:31.165 filename=/dev/nvme0n3 00:35:31.165 [job3] 00:35:31.165 filename=/dev/nvme0n4 00:35:31.165 Could not set queue depth (nvme0n1) 00:35:31.165 Could not set queue depth (nvme0n2) 00:35:31.165 Could not set queue depth (nvme0n3) 00:35:31.165 Could not set queue depth (nvme0n4) 00:35:31.165 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:31.165 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:31.165 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:31.165 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:31.165 fio-3.35 00:35:31.165 Starting 4 threads 00:35:32.544 00:35:32.544 job0: (groupid=0, jobs=1): err= 0: pid=1336474: Mon Oct 14 17:52:31 2024 00:35:32.544 read: IOPS=25, BW=104KiB/s (106kB/s)(108KiB/1039msec) 00:35:32.544 slat (nsec): min=7035, max=26674, avg=20744.04, stdev=5496.75 00:35:32.544 clat (usec): min=194, max=41248, avg=34946.51, stdev=14744.04 00:35:32.544 lat (usec): min=204, max=41257, avg=34967.26, stdev=14747.45 00:35:32.544 clat percentiles (usec): 00:35:32.544 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 219], 20.00th=[40633], 00:35:32.544 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:32.544 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:32.544 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:32.544 | 99.99th=[41157] 00:35:32.544 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:35:32.544 slat (nsec): min=9966, max=44045, avg=11417.49, stdev=2332.58 00:35:32.544 clat (usec): min=142, max=315, avg=169.48, stdev=28.67 00:35:32.544 lat (usec): min=152, max=359, avg=180.90, stdev=29.47 00:35:32.544 clat percentiles (usec): 00:35:32.544 | 1.00th=[ 147], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 153], 00:35:32.544 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:35:32.544 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 237], 95.00th=[ 241], 00:35:32.544 | 
99.00th=[ 253], 99.50th=[ 258], 99.90th=[ 318], 99.95th=[ 318], 00:35:32.544 | 99.99th=[ 318] 00:35:32.544 bw ( KiB/s): min= 4096, max= 4096, per=34.63%, avg=4096.00, stdev= 0.00, samples=1 00:35:32.544 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:32.544 lat (usec) : 250=94.25%, 500=1.48% 00:35:32.544 lat (msec) : 50=4.27% 00:35:32.544 cpu : usr=0.48%, sys=0.77%, ctx=539, majf=0, minf=1 00:35:32.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.544 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:32.544 job1: (groupid=0, jobs=1): err= 0: pid=1336475: Mon Oct 14 17:52:31 2024 00:35:32.544 read: IOPS=22, BW=88.8KiB/s (90.9kB/s)(92.0KiB/1036msec) 00:35:32.544 slat (nsec): min=10269, max=23857, avg=22779.04, stdev=2765.01 00:35:32.544 clat (usec): min=40786, max=41832, avg=41023.65, stdev=192.74 00:35:32.544 lat (usec): min=40810, max=41855, avg=41046.43, stdev=191.96 00:35:32.544 clat percentiles (usec): 00:35:32.544 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:32.544 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:32.544 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:32.544 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:35:32.544 | 99.99th=[41681] 00:35:32.544 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:35:32.544 slat (nsec): min=9764, max=44478, avg=11054.37, stdev=2287.91 00:35:32.544 clat (usec): min=141, max=270, avg=159.50, stdev=10.74 00:35:32.544 lat (usec): min=152, max=306, avg=170.56, stdev=11.56 00:35:32.544 clat percentiles (usec): 00:35:32.544 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 153], 00:35:32.544 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:35:32.544 | 70.00th=[ 163], 80.00th=[ 165], 90.00th=[ 169], 95.00th=[ 172], 00:35:32.544 | 99.00th=[ 186], 99.50th=[ 239], 99.90th=[ 269], 99.95th=[ 269], 00:35:32.544 | 99.99th=[ 269] 00:35:32.544 bw ( KiB/s): min= 4096, max= 4096, per=34.63%, avg=4096.00, stdev= 0.00, samples=1 00:35:32.544 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:32.544 lat (usec) : 250=95.33%, 500=0.37% 00:35:32.544 lat (msec) : 50=4.30% 00:35:32.544 cpu : usr=0.29%, sys=0.58%, ctx=537, majf=0, minf=1 00:35:32.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.544 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:32.544 job2: (groupid=0, jobs=1): err= 0: pid=1336477: Mon Oct 14 17:52:31 2024 00:35:32.544 read: IOPS=512, BW=2050KiB/s (2100kB/s)(2116KiB/1032msec) 00:35:32.544 slat (nsec): min=7703, max=25029, avg=9043.30, stdev=2706.89 00:35:32.544 clat (usec): min=211, max=41063, avg=1560.43, stdev=7189.37 00:35:32.544 lat (usec): min=219, max=41087, avg=1569.47, stdev=7191.90 00:35:32.544 clat percentiles (usec): 00:35:32.544 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 239], 20.00th=[ 243], 00:35:32.544 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 
247], 60.00th=[ 247], 00:35:32.544 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 258], 95.00th=[ 441], 00:35:32.544 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:32.544 | 99.99th=[41157] 00:35:32.544 write: IOPS=992, BW=3969KiB/s (4064kB/s)(4096KiB/1032msec); 0 zone resets 00:35:32.544 slat (usec): min=10, max=990, avg=14.38, stdev=35.14 00:35:32.544 clat (usec): min=126, max=346, avg=173.79, stdev=40.34 00:35:32.544 lat (usec): min=138, max=1302, avg=188.17, stdev=57.69 00:35:32.544 clat percentiles (usec): 00:35:32.544 | 1.00th=[ 130], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:35:32.544 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 169], 00:35:32.544 | 70.00th=[ 208], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 239], 00:35:32.544 | 99.00th=[ 269], 99.50th=[ 314], 99.90th=[ 326], 99.95th=[ 347], 00:35:32.544 | 99.99th=[ 347] 00:35:32.544 bw ( KiB/s): min= 8192, max= 8192, per=69.27%, avg=8192.00, stdev= 0.00, samples=1 00:35:32.544 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:35:32.544 lat (usec) : 250=89.89%, 500=8.95%, 750=0.06% 00:35:32.544 lat (msec) : 50=1.09% 00:35:32.544 cpu : usr=0.87%, sys=3.01%, ctx=1556, majf=0, minf=1 00:35:32.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.544 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:32.544 job3: (groupid=0, jobs=1): err= 0: pid=1336478: Mon Oct 14 17:52:31 2024 00:35:32.544 read: IOPS=556, BW=2228KiB/s (2281kB/s)(2308KiB/1036msec) 00:35:32.544 slat (nsec): min=6598, max=27544, avg=7963.46, stdev=2889.31 00:35:32.544 clat (usec): min=194, max=41386, avg=1445.26, stdev=6878.25 00:35:32.544 lat (usec): min=203, max=41394, avg=1453.23, stdev=6878.59 00:35:32.544 clat percentiles (usec): 00:35:32.544 | 1.00th=[ 202], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 243], 00:35:32.544 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 247], 60.00th=[ 249], 00:35:32.544 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 258], 95.00th=[ 285], 00:35:32.545 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:32.545 | 99.99th=[41157] 00:35:32.545 write: IOPS=988, BW=3954KiB/s (4049kB/s)(4096KiB/1036msec); 0 zone resets 00:35:32.545 slat (nsec): min=9290, max=37767, avg=10494.73, stdev=1860.14 00:35:32.545 clat (usec): min=121, max=517, avg=178.23, stdev=43.33 00:35:32.545 lat (usec): min=131, max=555, avg=188.72, stdev=43.67 00:35:32.545 clat percentiles (usec): 00:35:32.545 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 143], 00:35:32.545 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 155], 60.00th=[ 182], 00:35:32.545 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 239], 95.00th=[ 245], 00:35:32.545 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 334], 99.95th=[ 519], 00:35:32.545 | 99.99th=[ 519] 00:35:32.545 bw ( KiB/s): min= 8192, max= 8192, per=69.27%, avg=8192.00, stdev= 0.00, samples=1 00:35:32.545 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:35:32.545 lat (usec) : 250=87.38%, 500=11.49%, 750=0.06% 00:35:32.545 lat (msec) : 50=1.06% 00:35:32.545 cpu : usr=0.87%, sys=1.35%, ctx=1601, majf=0, minf=2 00:35:32.545 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:35:32.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.545 issued rwts: total=577,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.545 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:32.545 00:35:32.545 Run status group 0 (all jobs): 00:35:32.545 READ: bw=4450KiB/s (4557kB/s), 88.8KiB/s-2228KiB/s (90.9kB/s-2281kB/s), io=4624KiB (4735kB), run=1032-1039msec 00:35:32.545 WRITE: bw=11.5MiB/s (12.1MB/s), 1971KiB/s-3969KiB/s (2018kB/s-4064kB/s), io=12.0MiB (12.6MB), run=1032-1039msec 00:35:32.545 00:35:32.545 Disk stats (read/write): 00:35:32.545 nvme0n1: ios=71/512, merge=0/0, ticks=766/80, in_queue=846, util=86.47% 00:35:32.545 nvme0n2: ios=47/512, merge=0/0, ticks=1726/76, in_queue=1802, util=97.86% 00:35:32.545 nvme0n3: ios=581/1024, merge=0/0, ticks=810/164, in_queue=974, util=100.00% 00:35:32.545 nvme0n4: ios=584/1024, merge=0/0, ticks=810/177, in_queue=987, util=90.75% 00:35:32.545 17:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:35:32.545 [global] 00:35:32.545 thread=1 00:35:32.545 invalidate=1 00:35:32.545 rw=randwrite 00:35:32.545 time_based=1 00:35:32.545 runtime=1 00:35:32.545 ioengine=libaio 00:35:32.545 direct=1 00:35:32.545 bs=4096 00:35:32.545 iodepth=1 00:35:32.545 norandommap=0 00:35:32.545 numjobs=1 00:35:32.545 00:35:32.545 verify_dump=1 00:35:32.545 verify_backlog=512 00:35:32.545 verify_state_save=0 00:35:32.545 do_verify=1 00:35:32.545 verify=crc32c-intel 00:35:32.545 [job0] 00:35:32.545 filename=/dev/nvme0n1 00:35:32.545 [job1] 00:35:32.545 filename=/dev/nvme0n2 00:35:32.545 [job2] 00:35:32.545 filename=/dev/nvme0n3 00:35:32.545 [job3] 00:35:32.545 filename=/dev/nvme0n4 00:35:32.545 Could not set queue depth (nvme0n1) 00:35:32.545 Could not set queue depth (nvme0n2) 00:35:32.545 Could not set queue depth (nvme0n3) 00:35:32.545 Could not set queue depth (nvme0n4) 00:35:32.804 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:32.804 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:32.804 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:32.804 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:32.804 fio-3.35 00:35:32.804 Starting 4 threads 00:35:34.181 00:35:34.181 job0: (groupid=0, jobs=1): err= 0: pid=1336850: Mon Oct 14 17:52:32 2024 00:35:34.181 read: IOPS=1980, BW=7920KiB/s (8110kB/s)(7928KiB/1001msec) 00:35:34.181 slat (nsec): min=6394, max=27638, avg=7409.20, stdev=1224.40 00:35:34.181 clat (usec): min=175, max=41191, avg=323.64, stdev=2238.52 00:35:34.181 lat (usec): min=182, max=41214, avg=331.05, stdev=2239.22 00:35:34.181 clat percentiles (usec): 00:35:34.181 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 182], 20.00th=[ 184], 00:35:34.181 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:35:34.181 | 70.00th=[ 194], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 251], 00:35:34.181 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[41157], 99.95th=[41157], 00:35:34.181 | 99.99th=[41157] 00:35:34.181 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:35:34.181 slat (nsec): min=9030, max=43209, avg=10606.79, stdev=2775.37 00:35:34.181 clat (usec): min=122, 
max=388, avg=153.11, stdev=39.01 00:35:34.181 lat (usec): min=132, max=428, avg=163.71, stdev=39.90 00:35:34.181 clat percentiles (usec): 00:35:34.181 | 1.00th=[ 126], 5.00th=[ 128], 10.00th=[ 129], 20.00th=[ 130], 00:35:34.181 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 133], 60.00th=[ 137], 00:35:34.181 | 70.00th=[ 143], 80.00th=[ 182], 90.00th=[ 231], 95.00th=[ 243], 00:35:34.181 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 343], 99.95th=[ 388], 00:35:34.181 | 99.99th=[ 388] 00:35:34.181 bw ( KiB/s): min= 6288, max= 6288, per=44.83%, avg=6288.00, stdev= 0.00, samples=1 00:35:34.181 iops : min= 1572, max= 1572, avg=1572.00, stdev= 0.00, samples=1 00:35:34.181 lat (usec) : 250=95.83%, 500=4.02% 00:35:34.181 lat (msec) : 50=0.15% 00:35:34.181 cpu : usr=1.40%, sys=4.30%, ctx=4030, majf=0, minf=1 00:35:34.181 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.181 issued rwts: total=1982,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.181 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:34.181 job1: (groupid=0, jobs=1): err= 0: pid=1336851: Mon Oct 14 17:52:32 2024 00:35:34.181 read: IOPS=58, BW=235KiB/s (240kB/s)(240KiB/1022msec) 00:35:34.181 slat (nsec): min=7012, max=24139, avg=11704.93, stdev=5203.87 00:35:34.181 clat (usec): min=189, max=41803, avg=15156.64, stdev=19769.76 00:35:34.181 lat (usec): min=202, max=41812, avg=15168.34, stdev=19768.60 00:35:34.181 clat percentiles (usec): 00:35:34.181 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 219], 20.00th=[ 229], 00:35:34.181 | 30.00th=[ 231], 40.00th=[ 243], 50.00th=[ 258], 60.00th=[ 281], 00:35:34.181 | 70.00th=[40633], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:35:34.181 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:35:34.181 | 99.99th=[41681] 00:35:34.181 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:35:34.181 slat (nsec): min=9462, max=35381, avg=11952.67, stdev=2583.18 00:35:34.181 clat (usec): min=142, max=332, avg=202.02, stdev=33.28 00:35:34.181 lat (usec): min=152, max=344, avg=213.98, stdev=34.05 00:35:34.181 clat percentiles (usec): 00:35:34.181 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 165], 00:35:34.181 | 30.00th=[ 182], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:35:34.181 | 70.00th=[ 221], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 255], 00:35:34.181 | 99.00th=[ 277], 99.50th=[ 297], 99.90th=[ 334], 99.95th=[ 334], 00:35:34.181 | 99.99th=[ 334] 00:35:34.181 bw ( KiB/s): min= 4096, max= 4096, per=29.20%, avg=4096.00, stdev= 0.00, samples=1 00:35:34.181 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:34.181 lat (usec) : 250=88.99%, 500=7.17% 00:35:34.181 lat (msec) : 50=3.85% 00:35:34.181 cpu : usr=0.29%, sys=0.59%, ctx=575, majf=0, minf=1 00:35:34.181 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.181 issued rwts: total=60,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.181 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:34.181 job2: (groupid=0, jobs=1): err= 0: pid=1336853: Mon Oct 14 17:52:32 2024 00:35:34.181 read: IOPS=170, BW=683KiB/s (700kB/s)(684KiB/1001msec) 00:35:34.181 slat 
(nsec): min=7690, max=25873, avg=10218.48, stdev=4624.12 00:35:34.181 clat (usec): min=232, max=41426, avg=5162.92, stdev=13217.71 00:35:34.181 lat (usec): min=240, max=41437, avg=5173.14, stdev=13222.16 00:35:34.181 clat percentiles (usec): 00:35:34.181 | 1.00th=[ 233], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 243], 00:35:34.181 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:35:34.181 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[41157], 95.00th=[41157], 00:35:34.181 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:35:34.181 | 99.99th=[41681] 00:35:34.181 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:35:34.181 slat (nsec): min=10961, max=39172, avg=12076.28, stdev=1691.75 00:35:34.181 clat (usec): min=139, max=408, avg=210.04, stdev=29.24 00:35:34.181 lat (usec): min=150, max=420, avg=222.12, stdev=29.61 00:35:34.181 clat percentiles (usec): 00:35:34.181 | 1.00th=[ 151], 5.00th=[ 163], 10.00th=[ 176], 20.00th=[ 188], 00:35:34.181 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 215], 00:35:34.181 | 70.00th=[ 221], 80.00th=[ 233], 90.00th=[ 243], 95.00th=[ 253], 00:35:34.181 | 99.00th=[ 285], 99.50th=[ 343], 99.90th=[ 408], 99.95th=[ 408], 00:35:34.181 | 99.99th=[ 408] 00:35:34.181 bw ( KiB/s): min= 4096, max= 4096, per=29.20%, avg=4096.00, stdev= 0.00, samples=1 00:35:34.181 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:34.181 lat (usec) : 250=81.70%, 500=15.23% 00:35:34.181 lat (msec) : 50=3.07% 00:35:34.181 cpu : usr=0.50%, sys=0.60%, ctx=683, majf=0, minf=1 00:35:34.181 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.181 issued rwts: total=171,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.181 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:34.181 job3: (groupid=0, jobs=1): err= 0: pid=1336855: Mon Oct 14 17:52:32 2024 00:35:34.181 read: IOPS=175, BW=703KiB/s (720kB/s)(704KiB/1001msec) 00:35:34.181 slat (nsec): min=6676, max=27543, avg=9590.13, stdev=5463.20 00:35:34.181 clat (usec): min=222, max=41115, avg=5105.94, stdev=13235.18 00:35:34.181 lat (usec): min=230, max=41138, avg=5115.53, stdev=13240.12 00:35:34.181 clat percentiles (usec): 00:35:34.181 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 239], 00:35:34.181 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:35:34.181 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[40633], 95.00th=[41157], 00:35:34.181 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:34.181 | 99.99th=[41157] 00:35:34.181 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:35:34.181 slat (nsec): min=9764, max=38576, avg=10850.32, stdev=1510.77 00:35:34.181 clat (usec): min=151, max=348, avg=181.25, stdev=18.15 00:35:34.181 lat (usec): min=161, max=375, avg=192.10, stdev=18.62 00:35:34.181 clat percentiles (usec): 00:35:34.181 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:35:34.181 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:35:34.181 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 206], 00:35:34.181 | 99.00th=[ 233], 99.50th=[ 255], 99.90th=[ 351], 99.95th=[ 351], 00:35:34.182 | 99.99th=[ 351] 00:35:34.182 bw ( KiB/s): min= 4096, max= 4096, per=29.20%, avg=4096.00, stdev= 0.00, samples=1 00:35:34.182 
iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:34.182 lat (usec) : 250=90.41%, 500=6.54% 00:35:34.182 lat (msec) : 50=3.05% 00:35:34.182 cpu : usr=0.30%, sys=0.70%, ctx=689, majf=0, minf=1 00:35:34.182 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.182 issued rwts: total=176,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.182 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:34.182 00:35:34.182 Run status group 0 (all jobs): 00:35:34.182 READ: bw=9350KiB/s (9575kB/s), 235KiB/s-7920KiB/s (240kB/s-8110kB/s), io=9556KiB (9785kB), run=1001-1022msec 00:35:34.182 WRITE: bw=13.7MiB/s (14.4MB/s), 2004KiB/s-8184KiB/s (2052kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1022msec 00:35:34.182 00:35:34.182 Disk stats (read/write): 00:35:34.182 nvme0n1: ios=1586/1835, merge=0/0, ticks=552/275, in_queue=827, util=87.17% 00:35:34.182 nvme0n2: ios=80/512, merge=0/0, ticks=1414/100, in_queue=1514, util=96.55% 00:35:34.182 nvme0n3: ios=18/512, merge=0/0, ticks=739/101, in_queue=840, util=88.98% 00:35:34.182 nvme0n4: ios=42/512, merge=0/0, ticks=1722/93, in_queue=1815, util=98.53% 00:35:34.182 17:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:35:34.182 [global] 00:35:34.182 thread=1 00:35:34.182 invalidate=1 00:35:34.182 rw=write 00:35:34.182 time_based=1 00:35:34.182 runtime=1 00:35:34.182 ioengine=libaio 00:35:34.182 direct=1 00:35:34.182 bs=4096 00:35:34.182 iodepth=128 00:35:34.182 norandommap=0 00:35:34.182 numjobs=1 00:35:34.182 00:35:34.182 verify_dump=1 00:35:34.182 verify_backlog=512 00:35:34.182 verify_state_save=0 00:35:34.182 do_verify=1 00:35:34.182 verify=crc32c-intel 00:35:34.182 [job0] 00:35:34.182 filename=/dev/nvme0n1 00:35:34.182 [job1] 00:35:34.182 filename=/dev/nvme0n2 00:35:34.182 [job2] 00:35:34.182 filename=/dev/nvme0n3 00:35:34.182 [job3] 00:35:34.182 filename=/dev/nvme0n4 00:35:34.182 Could not set queue depth (nvme0n1) 00:35:34.182 Could not set queue depth (nvme0n2) 00:35:34.182 Could not set queue depth (nvme0n3) 00:35:34.182 Could not set queue depth (nvme0n4) 00:35:34.182 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:34.182 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:34.182 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:34.182 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:34.182 fio-3.35 00:35:34.182 Starting 4 threads 00:35:35.559 00:35:35.560 job0: (groupid=0, jobs=1): err= 0: pid=1337220: Mon Oct 14 17:52:34 2024 00:35:35.560 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:35:35.560 slat (nsec): min=1411, max=12968k, avg=103934.24, stdev=681740.15 00:35:35.560 clat (usec): min=6695, max=46073, avg=13594.62, stdev=6513.66 00:35:35.560 lat (usec): min=6701, max=58965, avg=13698.56, stdev=6574.87 00:35:35.560 clat percentiles (usec): 00:35:35.560 | 1.00th=[ 7504], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9896], 00:35:35.560 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11863], 60.00th=[12256], 00:35:35.560 | 
70.00th=[12387], 80.00th=[15139], 90.00th=[22676], 95.00th=[24511], 00:35:35.560 | 99.00th=[42730], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:35:35.560 | 99.99th=[45876] 00:35:35.560 write: IOPS=3442, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1006msec); 0 zone resets 00:35:35.560 slat (usec): min=2, max=34577, avg=190.65, stdev=1277.38 00:35:35.560 clat (usec): min=4608, max=84575, avg=24533.51, stdev=15310.73 00:35:35.560 lat (usec): min=6947, max=84589, avg=24724.15, stdev=15415.89 00:35:35.560 clat percentiles (usec): 00:35:35.560 | 1.00th=[ 8717], 5.00th=[10552], 10.00th=[11338], 20.00th=[12256], 00:35:35.560 | 30.00th=[15401], 40.00th=[15926], 50.00th=[17695], 60.00th=[22676], 00:35:35.560 | 70.00th=[26346], 80.00th=[36439], 90.00th=[43254], 95.00th=[57934], 00:35:35.560 | 99.00th=[79168], 99.50th=[82314], 99.90th=[84411], 99.95th=[84411], 00:35:35.560 | 99.99th=[84411] 00:35:35.560 bw ( KiB/s): min=12288, max=14400, per=20.42%, avg=13344.00, stdev=1493.41, samples=2 00:35:35.560 iops : min= 3072, max= 3600, avg=3336.00, stdev=373.35, samples=2 00:35:35.560 lat (msec) : 10=14.83%, 20=54.63%, 50=27.24%, 100=3.31% 00:35:35.560 cpu : usr=2.89%, sys=4.18%, ctx=345, majf=0, minf=1 00:35:35.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:35:35.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:35.560 issued rwts: total=3072,3463,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:35.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:35.560 job1: (groupid=0, jobs=1): err= 0: pid=1337221: Mon Oct 14 17:52:34 2024 00:35:35.560 read: IOPS=3185, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1004msec) 00:35:35.560 slat (nsec): min=1555, max=17662k, avg=122750.93, stdev=860794.67 00:35:35.560 clat (usec): min=1924, max=56743, avg=14279.98, stdev=6742.23 00:35:35.560 lat (usec): min=4233, max=56751, avg=14402.73, stdev=6812.44 00:35:35.560 clat percentiles (usec): 00:35:35.560 | 1.00th=[ 7111], 5.00th=[ 7308], 10.00th=[ 8094], 20.00th=[ 8717], 00:35:35.560 | 30.00th=[10814], 40.00th=[12387], 50.00th=[12780], 60.00th=[14091], 00:35:35.560 | 70.00th=[15270], 80.00th=[17695], 90.00th=[20841], 95.00th=[23987], 00:35:35.560 | 99.00th=[44827], 99.50th=[50594], 99.90th=[56886], 99.95th=[56886], 00:35:35.560 | 99.99th=[56886] 00:35:35.560 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:35:35.560 slat (usec): min=2, max=23719, avg=163.77, stdev=875.14 00:35:35.560 clat (usec): min=2885, max=56740, avg=22777.15, stdev=11690.57 00:35:35.560 lat (usec): min=2895, max=56772, avg=22940.92, stdev=11768.94 00:35:35.560 clat percentiles (usec): 00:35:35.560 | 1.00th=[ 4883], 5.00th=[ 7570], 10.00th=[10028], 20.00th=[14615], 00:35:35.560 | 30.00th=[15926], 40.00th=[16581], 50.00th=[19006], 60.00th=[21627], 00:35:35.560 | 70.00th=[27395], 80.00th=[33424], 90.00th=[42730], 95.00th=[47973], 00:35:35.560 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51119], 99.95th=[56886], 00:35:35.560 | 99.99th=[56886] 00:35:35.560 bw ( KiB/s): min=14128, max=14528, per=21.93%, avg=14328.00, stdev=282.84, samples=2 00:35:35.560 iops : min= 3532, max= 3632, avg=3582.00, stdev=70.71, samples=2 00:35:35.560 lat (msec) : 2=0.01%, 4=0.19%, 10=17.75%, 20=52.77%, 50=28.63% 00:35:35.560 lat (msec) : 100=0.63% 00:35:35.560 cpu : usr=2.89%, sys=4.59%, ctx=391, majf=0, minf=1 00:35:35.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:35:35.560 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:35.560 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:35.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:35.560 job2: (groupid=0, jobs=1): err= 0: pid=1337222: Mon Oct 14 17:52:34 2024 00:35:35.560 read: IOPS=3593, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1007msec) 00:35:35.560 slat (usec): min=2, max=17951, avg=121.63, stdev=995.71 00:35:35.560 clat (msec): min=4, max=104, avg=16.01, stdev=12.35 00:35:35.560 lat (msec): min=5, max=104, avg=16.13, stdev=12.47 00:35:35.560 clat percentiles (msec): 00:35:35.560 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:35:35.560 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 14], 00:35:35.560 | 70.00th=[ 17], 80.00th=[ 20], 90.00th=[ 24], 95.00th=[ 34], 00:35:35.560 | 99.00th=[ 88], 99.50th=[ 99], 99.90th=[ 105], 99.95th=[ 105], 00:35:35.560 | 99.99th=[ 105] 00:35:35.560 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:35:35.560 slat (usec): min=2, max=17639, avg=98.53, stdev=693.65 00:35:35.560 clat (msec): min=2, max=104, avg=17.05, stdev=14.47 00:35:35.560 lat (msec): min=2, max=104, avg=17.15, stdev=14.55 00:35:35.560 clat percentiles (msec): 00:35:35.560 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:35:35.560 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 13], 00:35:35.560 | 70.00th=[ 16], 80.00th=[ 25], 90.00th=[ 39], 95.00th=[ 48], 00:35:35.560 | 99.00th=[ 74], 99.50th=[ 79], 99.90th=[ 89], 99.95th=[ 89], 00:35:35.560 | 99.99th=[ 105] 00:35:35.560 bw ( KiB/s): min=12048, max=19976, per=24.50%, avg=16012.00, stdev=5605.94, samples=2 00:35:35.560 iops : min= 3012, max= 4994, avg=4003.00, stdev=1401.49, samples=2 00:35:35.560 lat (msec) : 4=0.38%, 10=31.87%, 20=47.65%, 50=17.04%, 100=2.86% 00:35:35.560 lat (msec) : 250=0.19% 00:35:35.560 cpu : usr=3.18%, sys=5.77%, ctx=252, majf=0, minf=1 00:35:35.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:35:35.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:35.560 issued rwts: total=3619,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:35.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:35.560 job3: (groupid=0, jobs=1): err= 0: pid=1337223: Mon Oct 14 17:52:34 2024 00:35:35.560 read: IOPS=5059, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1012msec) 00:35:35.560 slat (nsec): min=1233, max=21405k, avg=89847.32, stdev=742323.42 00:35:35.560 clat (usec): min=1235, max=63207, avg=12236.01, stdev=8156.07 00:35:35.560 lat (usec): min=1240, max=63231, avg=12325.86, stdev=8215.25 00:35:35.560 clat percentiles (usec): 00:35:35.560 | 1.00th=[ 2057], 5.00th=[ 5538], 10.00th=[ 6980], 20.00th=[ 8455], 00:35:35.560 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[11076], 00:35:35.560 | 70.00th=[12256], 80.00th=[13435], 90.00th=[19530], 95.00th=[31065], 00:35:35.560 | 99.00th=[51119], 99.50th=[51643], 99.90th=[61080], 99.95th=[61080], 00:35:35.560 | 99.99th=[63177] 00:35:35.560 write: IOPS=5326, BW=20.8MiB/s (21.8MB/s)(21.1MiB/1012msec); 0 zone resets 00:35:35.560 slat (usec): min=2, max=15758, avg=90.92, stdev=670.94 00:35:35.560 clat (usec): min=1228, max=103463, avg=12123.55, stdev=12042.58 00:35:35.560 lat (usec): min=1329, max=103475, avg=12214.47, stdev=12122.69 00:35:35.560 clat 
percentiles (msec): 00:35:35.560 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 8], 00:35:35.560 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 11], 00:35:35.560 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 16], 95.00th=[ 22], 00:35:35.560 | 99.00th=[ 81], 99.50th=[ 91], 99.90th=[ 104], 99.95th=[ 104], 00:35:35.560 | 99.99th=[ 104] 00:35:35.560 bw ( KiB/s): min=21008, max=21088, per=32.21%, avg=21048.00, stdev=56.57, samples=2 00:35:35.560 iops : min= 5252, max= 5272, avg=5262.00, stdev=14.14, samples=2 00:35:35.560 lat (msec) : 2=0.43%, 4=1.74%, 10=54.27%, 20=35.44%, 50=6.09% 00:35:35.560 lat (msec) : 100=1.97%, 250=0.06% 00:35:35.560 cpu : usr=3.96%, sys=6.73%, ctx=308, majf=0, minf=2 00:35:35.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:35:35.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:35.560 issued rwts: total=5120,5390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:35.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:35.560 00:35:35.560 Run status group 0 (all jobs): 00:35:35.560 READ: bw=57.9MiB/s (60.7MB/s), 11.9MiB/s-19.8MiB/s (12.5MB/s-20.7MB/s), io=58.6MiB (61.5MB), run=1004-1012msec 00:35:35.560 WRITE: bw=63.8MiB/s (66.9MB/s), 13.4MiB/s-20.8MiB/s (14.1MB/s-21.8MB/s), io=64.6MiB (67.7MB), run=1004-1012msec 00:35:35.560 00:35:35.560 Disk stats (read/write): 00:35:35.560 nvme0n1: ios=2590/2730, merge=0/0, ticks=17676/34984, in_queue=52660, util=98.10% 00:35:35.560 nvme0n2: ios=2711/3072, merge=0/0, ticks=37217/69153, in_queue=106370, util=98.48% 00:35:35.560 nvme0n3: ios=2901/3072, merge=0/0, ticks=48198/58490, in_queue=106688, util=98.24% 00:35:35.560 nvme0n4: ios=4740/5120, merge=0/0, ticks=39549/36552, in_queue=76101, util=98.54% 00:35:35.560 17:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:35:35.560 [global] 00:35:35.560 thread=1 00:35:35.560 invalidate=1 00:35:35.560 rw=randwrite 00:35:35.560 time_based=1 00:35:35.560 runtime=1 00:35:35.560 ioengine=libaio 00:35:35.560 direct=1 00:35:35.560 bs=4096 00:35:35.560 iodepth=128 00:35:35.560 norandommap=0 00:35:35.560 numjobs=1 00:35:35.560 00:35:35.560 verify_dump=1 00:35:35.560 verify_backlog=512 00:35:35.560 verify_state_save=0 00:35:35.560 do_verify=1 00:35:35.560 verify=crc32c-intel 00:35:35.560 [job0] 00:35:35.560 filename=/dev/nvme0n1 00:35:35.560 [job1] 00:35:35.560 filename=/dev/nvme0n2 00:35:35.560 [job2] 00:35:35.560 filename=/dev/nvme0n3 00:35:35.560 [job3] 00:35:35.560 filename=/dev/nvme0n4 00:35:35.560 Could not set queue depth (nvme0n1) 00:35:35.560 Could not set queue depth (nvme0n2) 00:35:35.560 Could not set queue depth (nvme0n3) 00:35:35.560 Could not set queue depth (nvme0n4) 00:35:35.820 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:35.820 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:35.820 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:35.820 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:35.820 fio-3.35 00:35:35.820 Starting 4 threads 00:35:37.224 00:35:37.224 job0: (groupid=0, jobs=1): err= 0: 
pid=1337596: Mon Oct 14 17:52:36 2024 00:35:37.224 read: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec) 00:35:37.224 slat (nsec): min=1434, max=12801k, avg=117495.17, stdev=744104.64 00:35:37.224 clat (usec): min=3356, max=49698, avg=12506.93, stdev=8502.88 00:35:37.224 lat (usec): min=3367, max=49708, avg=12624.43, stdev=8583.56 00:35:37.224 clat percentiles (usec): 00:35:37.224 | 1.00th=[ 3982], 5.00th=[ 6325], 10.00th=[ 8225], 20.00th=[ 8455], 00:35:37.224 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[10290], 00:35:37.224 | 70.00th=[12256], 80.00th=[14222], 90.00th=[17957], 95.00th=[32637], 00:35:37.224 | 99.00th=[47973], 99.50th=[49021], 99.90th=[49546], 99.95th=[49546], 00:35:37.224 | 99.99th=[49546] 00:35:37.224 write: IOPS=4736, BW=18.5MiB/s (19.4MB/s)(18.7MiB/1013msec); 0 zone resets 00:35:37.224 slat (usec): min=2, max=11646, avg=88.28, stdev=532.99 00:35:37.224 clat (usec): min=2720, max=49661, avg=14679.97, stdev=8189.13 00:35:37.224 lat (usec): min=2730, max=49665, avg=14768.25, stdev=8214.61 00:35:37.224 clat percentiles (usec): 00:35:37.224 | 1.00th=[ 3523], 5.00th=[ 5800], 10.00th=[ 7177], 20.00th=[ 8848], 00:35:37.224 | 30.00th=[10552], 40.00th=[12649], 50.00th=[14615], 60.00th=[15270], 00:35:37.224 | 70.00th=[15533], 80.00th=[15795], 90.00th=[21103], 95.00th=[39584], 00:35:37.224 | 99.00th=[42730], 99.50th=[43254], 99.90th=[49021], 99.95th=[49546], 00:35:37.224 | 99.99th=[49546] 00:35:37.224 bw ( KiB/s): min=16520, max=20848, per=28.61%, avg=18684.00, stdev=3060.36, samples=2 00:35:37.224 iops : min= 4130, max= 5212, avg=4671.00, stdev=765.09, samples=2 00:35:37.224 lat (msec) : 4=1.45%, 10=41.30%, 20=47.03%, 50=10.22% 00:35:37.224 cpu : usr=4.25%, sys=6.23%, ctx=444, majf=0, minf=1 00:35:37.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:35:37.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:37.224 issued rwts: total=4608,4798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:37.224 job1: (groupid=0, jobs=1): err= 0: pid=1337597: Mon Oct 14 17:52:36 2024 00:35:37.224 read: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec) 00:35:37.224 slat (nsec): min=1301, max=13739k, avg=89156.72, stdev=606171.33 00:35:37.224 clat (usec): min=3145, max=33538, avg=11433.97, stdev=4264.35 00:35:37.224 lat (usec): min=3155, max=33546, avg=11523.12, stdev=4299.08 00:35:37.224 clat percentiles (usec): 00:35:37.224 | 1.00th=[ 3687], 5.00th=[ 7570], 10.00th=[ 8225], 20.00th=[ 8586], 00:35:37.224 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[10159], 00:35:37.224 | 70.00th=[12518], 80.00th=[15270], 90.00th=[18482], 95.00th=[20055], 00:35:37.224 | 99.00th=[23200], 99.50th=[23200], 99.90th=[23725], 99.95th=[25035], 00:35:37.224 | 99.99th=[33424] 00:35:37.224 write: IOPS=4321, BW=16.9MiB/s (17.7MB/s)(17.1MiB/1014msec); 0 zone resets 00:35:37.224 slat (usec): min=2, max=17924, avg=139.07, stdev=779.16 00:35:37.224 clat (usec): min=2281, max=58238, avg=18605.43, stdev=10796.82 00:35:37.224 lat (usec): min=2292, max=58249, avg=18744.50, stdev=10852.67 00:35:37.224 clat percentiles (usec): 00:35:37.224 | 1.00th=[ 3359], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[11338], 00:35:37.224 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15533], 60.00th=[15664], 00:35:37.224 | 70.00th=[15926], 80.00th=[23725], 90.00th=[39060], 95.00th=[42206], 00:35:37.224 | 
99.00th=[54264], 99.50th=[56886], 99.90th=[58459], 99.95th=[58459], 00:35:37.224 | 99.99th=[58459] 00:35:37.224 bw ( KiB/s): min=16520, max=17520, per=26.06%, avg=17020.00, stdev=707.11, samples=2 00:35:37.224 iops : min= 4130, max= 4380, avg=4255.00, stdev=176.78, samples=2 00:35:37.224 lat (msec) : 4=1.46%, 10=35.50%, 20=48.71%, 50=13.48%, 100=0.84% 00:35:37.224 cpu : usr=3.16%, sys=5.13%, ctx=451, majf=0, minf=2 00:35:37.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:35:37.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:37.224 issued rwts: total=4096,4382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:37.224 job2: (groupid=0, jobs=1): err= 0: pid=1337598: Mon Oct 14 17:52:36 2024 00:35:37.224 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:35:37.224 slat (nsec): min=1827, max=12666k, avg=114904.01, stdev=712791.70 00:35:37.224 clat (usec): min=7561, max=47056, avg=14305.53, stdev=6419.20 00:35:37.224 lat (usec): min=7569, max=47066, avg=14420.44, stdev=6481.37 00:35:37.224 clat percentiles (usec): 00:35:37.224 | 1.00th=[ 8455], 5.00th=[10421], 10.00th=[10814], 20.00th=[10945], 00:35:37.224 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11863], 00:35:37.224 | 70.00th=[13829], 80.00th=[14877], 90.00th=[24773], 95.00th=[30278], 00:35:37.224 | 99.00th=[38011], 99.50th=[40109], 99.90th=[42206], 99.95th=[42206], 00:35:37.224 | 99.99th=[46924] 00:35:37.224 write: IOPS=3760, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1008msec); 0 zone resets 00:35:37.224 slat (usec): min=2, max=21081, avg=148.77, stdev=892.24 00:35:37.224 clat (usec): min=5381, max=54885, avg=20117.96, stdev=10949.37 00:35:37.224 lat (usec): min=5478, max=54917, avg=20266.73, stdev=11015.20 00:35:37.224 clat percentiles (usec): 00:35:37.224 | 1.00th=[ 8848], 5.00th=[10159], 10.00th=[10552], 20.00th=[10945], 00:35:37.224 | 30.00th=[11994], 40.00th=[15533], 50.00th=[17695], 60.00th=[18220], 00:35:37.224 | 70.00th=[20317], 80.00th=[26608], 90.00th=[39584], 95.00th=[47973], 00:35:37.224 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50070], 99.95th=[52167], 00:35:37.224 | 99.99th=[54789] 00:35:37.224 bw ( KiB/s): min=12920, max=16384, per=22.44%, avg=14652.00, stdev=2449.42, samples=2 00:35:37.224 iops : min= 3230, max= 4096, avg=3663.00, stdev=612.35, samples=2 00:35:37.224 lat (msec) : 10=3.66%, 20=73.88%, 50=21.93%, 100=0.53% 00:35:37.224 cpu : usr=3.57%, sys=5.26%, ctx=369, majf=0, minf=1 00:35:37.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:35:37.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:37.224 issued rwts: total=3584,3791,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:37.224 job3: (groupid=0, jobs=1): err= 0: pid=1337599: Mon Oct 14 17:52:36 2024 00:35:37.224 read: IOPS=3299, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1005msec) 00:35:37.224 slat (nsec): min=1657, max=11670k, avg=114103.58, stdev=778114.36 00:35:37.224 clat (usec): min=3736, max=54975, avg=13810.33, stdev=6255.90 00:35:37.224 lat (usec): min=3748, max=54984, avg=13924.44, stdev=6317.84 00:35:37.224 clat percentiles (usec): 00:35:37.224 | 1.00th=[ 5407], 5.00th=[ 6915], 10.00th=[ 9241], 20.00th=[ 9634], 00:35:37.224 
| 30.00th=[10159], 40.00th=[10683], 50.00th=[12780], 60.00th=[13698], 00:35:37.224 | 70.00th=[15139], 80.00th=[17171], 90.00th=[20579], 95.00th=[23725], 00:35:37.224 | 99.00th=[43779], 99.50th=[50594], 99.90th=[54789], 99.95th=[54789], 00:35:37.224 | 99.99th=[54789] 00:35:37.224 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:35:37.224 slat (usec): min=2, max=11027, avg=158.89, stdev=711.29 00:35:37.224 clat (usec): min=1076, max=54965, avg=22680.15, stdev=12792.96 00:35:37.224 lat (usec): min=1086, max=54979, avg=22839.05, stdev=12880.50 00:35:37.224 clat percentiles (usec): 00:35:37.224 | 1.00th=[ 3785], 5.00th=[ 7832], 10.00th=[10159], 20.00th=[11338], 00:35:37.224 | 30.00th=[12518], 40.00th=[17171], 50.00th=[17957], 60.00th=[20841], 00:35:37.225 | 70.00th=[28967], 80.00th=[37487], 90.00th=[43254], 95.00th=[45351], 00:35:37.225 | 99.00th=[51643], 99.50th=[53740], 99.90th=[54789], 99.95th=[54789], 00:35:37.225 | 99.99th=[54789] 00:35:37.225 bw ( KiB/s): min=13296, max=15376, per=21.95%, avg=14336.00, stdev=1470.78, samples=2 00:35:37.225 iops : min= 3324, max= 3844, avg=3584.00, stdev=367.70, samples=2 00:35:37.225 lat (msec) : 2=0.07%, 4=0.64%, 10=17.81%, 20=54.77%, 50=25.41% 00:35:37.225 lat (msec) : 100=1.30% 00:35:37.225 cpu : usr=2.89%, sys=5.18%, ctx=400, majf=0, minf=1 00:35:37.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:35:37.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:37.225 issued rwts: total=3316,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:37.225 00:35:37.225 Run status group 0 (all jobs): 00:35:37.225 READ: bw=60.1MiB/s (63.0MB/s), 12.9MiB/s-17.8MiB/s (13.5MB/s-18.6MB/s), io=61.0MiB (63.9MB), run=1005-1014msec 00:35:37.225 WRITE: bw=63.8MiB/s (66.9MB/s), 13.9MiB/s-18.5MiB/s (14.6MB/s-19.4MB/s), io=64.7MiB (67.8MB), run=1005-1014msec 00:35:37.225 00:35:37.225 Disk stats (read/write): 00:35:37.225 nvme0n1: ios=3846/4096, merge=0/0, ticks=47450/57423, in_queue=104873, util=97.70% 00:35:37.225 nvme0n2: ios=3584/3647, merge=0/0, ticks=35906/57334, in_queue=93240, util=86.79% 00:35:37.225 nvme0n3: ios=3118/3263, merge=0/0, ticks=23253/28858, in_queue=52111, util=98.54% 00:35:37.225 nvme0n4: ios=2617/3072, merge=0/0, ticks=35590/68797, in_queue=104387, util=97.80% 00:35:37.225 17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:35:37.225 17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1337827 00:35:37.225 17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:35:37.225 17:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:35:37.225 [global] 00:35:37.225 thread=1 00:35:37.225 invalidate=1 00:35:37.225 rw=read 00:35:37.225 time_based=1 00:35:37.225 runtime=10 00:35:37.225 ioengine=libaio 00:35:37.225 direct=1 00:35:37.225 bs=4096 00:35:37.225 iodepth=1 00:35:37.225 norandommap=1 00:35:37.225 numjobs=1 00:35:37.225 00:35:37.225 [job0] 00:35:37.225 filename=/dev/nvme0n1 00:35:37.225 [job1] 00:35:37.225 filename=/dev/nvme0n2 00:35:37.225 [job2] 00:35:37.225 filename=/dev/nvme0n3 00:35:37.225 [job3] 00:35:37.225 filename=/dev/nvme0n4 
00:35:37.225 Could not set queue depth (nvme0n1) 00:35:37.225 Could not set queue depth (nvme0n2) 00:35:37.225 Could not set queue depth (nvme0n3) 00:35:37.225 Could not set queue depth (nvme0n4) 00:35:37.489 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:37.489 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:37.489 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:37.489 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:37.489 fio-3.35 00:35:37.489 Starting 4 threads 00:35:40.033 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:35:40.291 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:35:40.291 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=552960, buflen=4096 00:35:40.291 fio: pid=1337987, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:40.550 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=44634112, buflen=4096 00:35:40.550 fio: pid=1337982, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:40.550 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:40.550 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:35:40.809 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:40.809 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:35:40.809 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=311296, buflen=4096 00:35:40.809 fio: pid=1337965, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:40.809 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:40.809 17:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:35:40.809 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=14954496, buflen=4096 00:35:40.809 fio: pid=1337968, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:41.068 00:35:41.068 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1337965: Mon Oct 14 17:52:39 2024 00:35:41.068 read: IOPS=24, BW=97.6KiB/s (100.0kB/s)(304KiB/3114msec) 00:35:41.068 slat (usec): min=12, max=5765, avg=97.40, stdev=654.44 00:35:41.068 clat (usec): min=395, max=44982, avg=40535.02, stdev=4701.38 00:35:41.068 lat (usec): min=431, max=46870, avg=40633.41, stdev=4755.09 00:35:41.068 clat percentiles 
(usec): 00:35:41.068 | 1.00th=[ 396], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:35:41.068 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:41.068 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:41.068 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:35:41.068 | 99.99th=[44827] 00:35:41.068 bw ( KiB/s): min= 93, max= 104, per=0.55%, avg=98.17, stdev= 4.67, samples=6 00:35:41.068 iops : min= 23, max= 26, avg=24.50, stdev= 1.22, samples=6 00:35:41.068 lat (usec) : 500=1.30% 00:35:41.068 lat (msec) : 50=97.40% 00:35:41.068 cpu : usr=0.13%, sys=0.00%, ctx=78, majf=0, minf=2 00:35:41.068 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:41.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:41.068 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:41.068 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:41.068 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:41.068 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1337968: Mon Oct 14 17:52:39 2024 00:35:41.068 read: IOPS=1093, BW=4374KiB/s (4479kB/s)(14.3MiB/3339msec) 00:35:41.068 slat (usec): min=6, max=14319, avg=15.75, stdev=296.49 00:35:41.068 clat (usec): min=185, max=50514, avg=891.02, stdev=5206.10 00:35:41.068 lat (usec): min=193, max=55612, avg=906.77, stdev=5269.09 00:35:41.068 clat percentiles (usec): 00:35:41.068 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 210], 00:35:41.068 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 219], 00:35:41.068 | 70.00th=[ 223], 80.00th=[ 227], 90.00th=[ 233], 95.00th=[ 243], 00:35:41.068 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:41.068 | 99.99th=[50594] 00:35:41.068 bw ( KiB/s): min= 96, max=17552, per=27.48%, avg=4858.00, stdev=7639.02, samples=6 00:35:41.068 iops : min= 24, max= 4388, avg=1214.50, stdev=1909.76, samples=6 00:35:41.068 lat (usec) : 250=96.60%, 500=1.70%, 750=0.03% 00:35:41.068 lat (msec) : 50=1.62%, 100=0.03% 00:35:41.068 cpu : usr=0.60%, sys=1.89%, ctx=3654, majf=0, minf=2 00:35:41.068 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:41.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:41.068 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:41.068 issued rwts: total=3652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:41.069 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:41.069 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1337982: Mon Oct 14 17:52:39 2024 00:35:41.069 read: IOPS=3780, BW=14.8MiB/s (15.5MB/s)(42.6MiB/2883msec) 00:35:41.069 slat (nsec): min=6332, max=57464, avg=7299.21, stdev=1050.55 00:35:41.069 clat (usec): min=194, max=553, avg=254.23, stdev=25.34 00:35:41.069 lat (usec): min=201, max=593, avg=261.53, stdev=25.60 00:35:41.069 clat percentiles (usec): 00:35:41.069 | 1.00th=[ 235], 5.00th=[ 241], 10.00th=[ 243], 20.00th=[ 245], 00:35:41.069 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 251], 00:35:41.069 | 70.00th=[ 253], 80.00th=[ 255], 90.00th=[ 260], 95.00th=[ 265], 00:35:41.069 | 99.00th=[ 392], 99.50th=[ 408], 99.90th=[ 461], 99.95th=[ 506], 00:35:41.069 | 99.99th=[ 553] 00:35:41.069 bw ( KiB/s): min=15216, max=15512, per=87.13%, avg=15406.40, stdev=124.12, samples=5 
00:35:41.069 iops : min= 3804, max= 3878, avg=3851.60, stdev=31.03, samples=5 00:35:41.069 lat (usec) : 250=49.19%, 500=50.72%, 750=0.07% 00:35:41.069 cpu : usr=1.08%, sys=3.37%, ctx=10899, majf=0, minf=1 00:35:41.069 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:41.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:41.069 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:41.069 issued rwts: total=10898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:41.069 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:41.069 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1337987: Mon Oct 14 17:52:39 2024 00:35:41.069 read: IOPS=50, BW=199KiB/s (204kB/s)(540KiB/2710msec) 00:35:41.069 slat (nsec): min=8269, max=36910, avg=16180.93, stdev=6890.18 00:35:41.069 clat (usec): min=202, max=41449, avg=19859.61, stdev=20407.84 00:35:41.069 lat (usec): min=212, max=41458, avg=19875.74, stdev=20406.04 00:35:41.069 clat percentiles (usec): 00:35:41.069 | 1.00th=[ 208], 5.00th=[ 225], 10.00th=[ 239], 20.00th=[ 247], 00:35:41.069 | 30.00th=[ 269], 40.00th=[ 297], 50.00th=[ 322], 60.00th=[40633], 00:35:41.069 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:41.069 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:35:41.069 | 99.99th=[41681] 00:35:41.069 bw ( KiB/s): min= 192, max= 248, per=1.15%, avg=204.80, stdev=24.40, samples=5 00:35:41.069 iops : min= 48, max= 62, avg=51.20, stdev= 6.10, samples=5 00:35:41.069 lat (usec) : 250=22.79%, 500=28.68% 00:35:41.069 lat (msec) : 50=47.79% 00:35:41.069 cpu : usr=0.00%, sys=0.18%, ctx=136, majf=0, minf=2 00:35:41.069 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:41.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:41.069 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:41.069 issued rwts: total=136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:41.069 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:41.069 00:35:41.069 Run status group 0 (all jobs): 00:35:41.069 READ: bw=17.3MiB/s (18.1MB/s), 97.6KiB/s-14.8MiB/s (100.0kB/s-15.5MB/s), io=57.7MiB (60.5MB), run=2710-3339msec 00:35:41.069 00:35:41.069 Disk stats (read/write): 00:35:41.069 nvme0n1: ios=75/0, merge=0/0, ticks=3041/0, in_queue=3041, util=94.14% 00:35:41.069 nvme0n2: ios=3650/0, merge=0/0, ticks=3168/0, in_queue=3168, util=94.67% 00:35:41.069 nvme0n3: ios=10760/0, merge=0/0, ticks=2677/0, in_queue=2677, util=96.20% 00:35:41.069 nvme0n4: ios=130/0, merge=0/0, ticks=2519/0, in_queue=2519, util=96.38% 00:35:41.069 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:41.069 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:35:41.328 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:41.328 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:35:41.588 17:52:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:41.588 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:35:41.847 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:41.847 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:35:41.847 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:35:41.847 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1337827 00:35:41.847 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:35:41.847 17:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:42.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:42.106 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:42.106 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:35:42.106 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:35:42.106 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:42.107 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:35:42.107 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:42.107 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:35:42.107 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:35:42.107 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:35:42.107 nvmf hotplug test: fio failed as expected 00:35:42.107 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:42.365 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:35:42.365 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:35:42.365 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:35:42.365 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:35:42.365 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:35:42.365 17:52:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:42.365 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:35:42.365 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:42.365 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:35:42.365 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:42.365 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:42.365 rmmod nvme_tcp 00:35:42.365 rmmod nvme_fabrics 00:35:42.365 rmmod nvme_keyring 00:35:42.365 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:42.365 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:35:42.365 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:35:42.365 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1335355 ']' 00:35:42.365 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1335355 00:35:42.365 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1335355 ']' 00:35:42.365 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1335355 00:35:42.366 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:35:42.366 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:42.366 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1335355 00:35:42.366 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:42.366 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:42.366 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1335355' 00:35:42.366 killing process with pid 1335355 00:35:42.366 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1335355 00:35:42.366 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1335355 00:35:42.625 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:42.625 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:42.625 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:42.625 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:35:42.625 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:35:42.625 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 
00:35:42.625 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:35:42.625 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:42.625 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:42.625 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:42.625 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:42.625 17:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:44.531 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:44.531 00:35:44.531 real 0m25.748s 00:35:44.531 user 1m31.382s 00:35:44.531 sys 0m10.942s 00:35:44.531 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:44.531 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:44.531 ************************************ 00:35:44.531 END TEST nvmf_fio_target 00:35:44.531 ************************************ 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:44.790 ************************************ 00:35:44.790 START TEST nvmf_bdevio 00:35:44.790 ************************************ 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:44.790 * Looking for test storage... 
00:35:44.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:44.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.790 --rc genhtml_branch_coverage=1 00:35:44.790 --rc genhtml_function_coverage=1 00:35:44.790 --rc genhtml_legend=1 00:35:44.790 --rc geninfo_all_blocks=1 00:35:44.790 --rc geninfo_unexecuted_blocks=1 00:35:44.790 00:35:44.790 ' 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:44.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.790 --rc genhtml_branch_coverage=1 00:35:44.790 --rc genhtml_function_coverage=1 00:35:44.790 --rc genhtml_legend=1 00:35:44.790 --rc geninfo_all_blocks=1 00:35:44.790 --rc geninfo_unexecuted_blocks=1 00:35:44.790 00:35:44.790 ' 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:44.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.790 --rc genhtml_branch_coverage=1 00:35:44.790 --rc genhtml_function_coverage=1 00:35:44.790 --rc genhtml_legend=1 00:35:44.790 --rc geninfo_all_blocks=1 00:35:44.790 --rc geninfo_unexecuted_blocks=1 00:35:44.790 00:35:44.790 ' 00:35:44.790 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:44.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.791 --rc genhtml_branch_coverage=1 00:35:44.791 --rc genhtml_function_coverage=1 00:35:44.791 --rc genhtml_legend=1 00:35:44.791 --rc geninfo_all_blocks=1 00:35:44.791 --rc geninfo_unexecuted_blocks=1 00:35:44.791 00:35:44.791 ' 00:35:44.791 17:52:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:44.791 17:52:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:35:44.791 17:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:51.375 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:51.375 17:52:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:51.375 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:51.376 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:51.376 Found net devices under 0000:86:00.0: cvl_0_0 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:51.376 Found net devices under 0000:86:00.1: cvl_0_1 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:51.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:51.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:35:51.376 00:35:51.376 --- 10.0.0.2 ping statistics --- 00:35:51.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.376 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:51.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:51.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:35:51.376 00:35:51.376 --- 10.0.0.1 ping statistics --- 00:35:51.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.376 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:51.376 17:52:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1342296 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1342296 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1342296 ']' 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:51.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:51.376 17:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:51.376 [2024-10-14 17:52:49.907242] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:51.376 [2024-10-14 17:52:49.908276] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:35:51.376 [2024-10-14 17:52:49.908315] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:51.376 [2024-10-14 17:52:49.984686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:51.376 [2024-10-14 17:52:50.034497] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:51.376 [2024-10-14 17:52:50.034530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:51.376 [2024-10-14 17:52:50.034541] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:51.376 [2024-10-14 17:52:50.034547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:51.376 [2024-10-14 17:52:50.034552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:51.376 [2024-10-14 17:52:50.037268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:51.376 [2024-10-14 17:52:50.037318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:51.376 [2024-10-14 17:52:50.037364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:51.376 [2024-10-14 17:52:50.037364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:51.376 [2024-10-14 17:52:50.105104] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:35:51.376 [2024-10-14 17:52:50.106434] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:51.376 [2024-10-14 17:52:50.106896] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:51.376 [2024-10-14 17:52:50.107102] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:51.377 [2024-10-14 17:52:50.107162] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:51.377 [2024-10-14 17:52:50.182257] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:51.377 Malloc0 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.377 17:52:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:51.377 [2024-10-14 17:52:50.258413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:51.377 { 00:35:51.377 "params": { 00:35:51.377 "name": "Nvme$subsystem", 00:35:51.377 "trtype": "$TEST_TRANSPORT", 00:35:51.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:51.377 "adrfam": "ipv4", 00:35:51.377 "trsvcid": "$NVMF_PORT", 00:35:51.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:51.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:51.377 "hdgst": ${hdgst:-false}, 00:35:51.377 "ddgst": ${ddgst:-false} 00:35:51.377 }, 00:35:51.377 "method": "bdev_nvme_attach_controller" 00:35:51.377 } 00:35:51.377 EOF 00:35:51.377 )") 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:35:51.377 17:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:51.377 "params": { 00:35:51.377 "name": "Nvme1", 00:35:51.377 "trtype": "tcp", 00:35:51.377 "traddr": "10.0.0.2", 00:35:51.377 "adrfam": "ipv4", 00:35:51.377 "trsvcid": "4420", 00:35:51.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:51.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:51.377 "hdgst": false, 00:35:51.377 "ddgst": false 00:35:51.377 }, 00:35:51.377 "method": "bdev_nvme_attach_controller" 00:35:51.377 }' 00:35:51.377 [2024-10-14 17:52:50.311202] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
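gen_nvmf_target_json above prints only the bdev_nvme_attach_controller fragment; bdevio reads it wrapped in the standard SPDK subsystem-config document via --json /dev/fd/62. A sketch of the equivalent full input, assuming the usual wrapper shape (the params block itself is verbatim from the log):

./test/bdev/bdevio/bdevio --json /dev/stdin <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF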
00:35:51.377 [2024-10-14 17:52:50.311248] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1342457 ] 00:35:51.377 [2024-10-14 17:52:50.379630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:51.377 [2024-10-14 17:52:50.425811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:51.377 [2024-10-14 17:52:50.425917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:51.377 [2024-10-14 17:52:50.425918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:51.637 I/O targets: 00:35:51.637 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:51.637 00:35:51.637 00:35:51.637 CUnit - A unit testing framework for C - Version 2.1-3 00:35:51.637 http://cunit.sourceforge.net/ 00:35:51.637 00:35:51.637 00:35:51.637 Suite: bdevio tests on: Nvme1n1 00:35:51.637 Test: blockdev write read block ...passed 00:35:51.637 Test: blockdev write zeroes read block ...passed 00:35:51.637 Test: blockdev write zeroes read no split ...passed 00:35:51.637 Test: blockdev write zeroes read split ...passed 00:35:51.637 Test: blockdev write zeroes read split partial ...passed 00:35:51.637 Test: blockdev reset ...[2024-10-14 17:52:50.769336] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:51.637 [2024-10-14 17:52:50.769402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de2400 (9): Bad file descriptor 00:35:51.897 [2024-10-14 17:52:50.814414] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:35:51.897 passed 00:35:51.897 Test: blockdev write read 8 blocks ...passed 00:35:51.897 Test: blockdev write read size > 128k ...passed 00:35:51.897 Test: blockdev write read invalid size ...passed 00:35:51.897 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:51.897 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:51.897 Test: blockdev write read max offset ...passed 00:35:51.897 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:51.897 Test: blockdev writev readv 8 blocks ...passed 00:35:52.156 Test: blockdev writev readv 30 x 1block ...passed 00:35:52.156 Test: blockdev writev readv block ...passed 00:35:52.156 Test: blockdev writev readv size > 128k ...passed 00:35:52.156 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:52.156 Test: blockdev comparev and writev ...[2024-10-14 17:52:51.153650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:52.156 [2024-10-14 17:52:51.153677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:52.156 [2024-10-14 17:52:51.153691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:52.156 [2024-10-14 17:52:51.153699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.157 [2024-10-14 17:52:51.153991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:52.157 [2024-10-14 17:52:51.154002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:52.157 [2024-10-14 17:52:51.154014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:52.157 [2024-10-14 17:52:51.154021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:52.157 [2024-10-14 17:52:51.154300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:52.157 [2024-10-14 17:52:51.154309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:52.157 [2024-10-14 17:52:51.154320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:52.157 [2024-10-14 17:52:51.154328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:52.157 [2024-10-14 17:52:51.154607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:52.157 [2024-10-14 17:52:51.154618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:52.157 [2024-10-14 17:52:51.154631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:52.157 [2024-10-14 17:52:51.154638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:52.157 passed 00:35:52.157 Test: blockdev nvme passthru rw ...passed 00:35:52.157 Test: blockdev nvme passthru vendor specific ...[2024-10-14 17:52:51.236941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:52.157 [2024-10-14 17:52:51.236961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:52.157 [2024-10-14 17:52:51.237072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:52.157 [2024-10-14 17:52:51.237081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:52.157 [2024-10-14 17:52:51.237192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:52.157 [2024-10-14 17:52:51.237201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:52.157 [2024-10-14 17:52:51.237306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:52.157 [2024-10-14 17:52:51.237315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:52.157 passed 00:35:52.157 Test: blockdev nvme admin passthru ...passed 00:35:52.157 Test: blockdev copy ...passed 00:35:52.157 00:35:52.157 Run Summary: Type Total Ran Passed Failed Inactive 00:35:52.157 suites 1 1 n/a 0 0 00:35:52.157 tests 23 23 23 0 0 00:35:52.157 asserts 152 152 152 0 n/a 00:35:52.157 00:35:52.157 Elapsed time = 1.351 seconds 00:35:52.416 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:52.416 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.416 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:52.417 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.417 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:52.417 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:52.417 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:52.417 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:52.417 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:52.417 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:52.417 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:52.417 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:52.417 rmmod nvme_tcp 00:35:52.417 rmmod nvme_fabrics 00:35:52.417 rmmod nvme_keyring 00:35:52.417 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
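With the suite finished (23/23 tests passed above), the EXIT trap fires nvmftestfini: the nvme modules are unloaded (the rmmod lines above) and, in the lines that follow, the target is killed and only the iptables rules the test tagged are stripped. A condensed sketch of that teardown using this run's names; ordering is illustrative.

kill -9 "$nvmfpid" 2>/dev/null || true
# Drop only rules carrying the SPDK_NVMF comment, leaving the host firewall intact.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# remove_spdk_ns is elided in the trace below; deleting the namespace is the assumed effect.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1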
00:35:52.417 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:52.417 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:52.417 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1342296 ']' 00:35:52.417 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1342296 00:35:52.417 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1342296 ']' 00:35:52.417 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1342296 00:35:52.417 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:35:52.417 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:52.417 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1342296 00:35:52.676 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:35:52.676 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:35:52.676 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1342296' 00:35:52.676 killing process with pid 1342296 00:35:52.676 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1342296 00:35:52.676 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1342296 00:35:52.676 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:52.676 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:52.676 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:52.676 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:35:52.676 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:35:52.676 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:52.676 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:35:52.676 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:52.676 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:52.676 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:52.676 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:52.676 17:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.236 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:55.236 00:35:55.236 real 0m10.115s 00:35:55.236 user 
0m9.430s 00:35:55.236 sys 0m5.210s 00:35:55.236 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:55.236 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:55.236 ************************************ 00:35:55.236 END TEST nvmf_bdevio 00:35:55.236 ************************************ 00:35:55.236 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:55.236 00:35:55.236 real 4m32.419s 00:35:55.236 user 9m8.694s 00:35:55.236 sys 1m51.294s 00:35:55.236 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:55.236 17:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:55.236 ************************************ 00:35:55.236 END TEST nvmf_target_core_interrupt_mode 00:35:55.236 ************************************ 00:35:55.236 17:52:53 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:55.236 17:52:53 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:55.236 17:52:53 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:55.236 17:52:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:55.236 ************************************ 00:35:55.236 START TEST nvmf_interrupt 00:35:55.236 ************************************ 00:35:55.236 17:52:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:55.236 * Looking for test storage... 
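The START TEST marker above comes from run_test wrapping the interrupt suite; outside Jenkins the same phase can be launched directly. A sketch, assuming this job's checkout path, root privileges, and e810 NICs wired as in this run:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode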
00:35:55.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:55.236 17:52:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:55.236 17:52:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:35:55.236 17:52:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:55.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.237 --rc genhtml_branch_coverage=1 00:35:55.237 --rc genhtml_function_coverage=1 00:35:55.237 --rc genhtml_legend=1 00:35:55.237 --rc geninfo_all_blocks=1 00:35:55.237 --rc geninfo_unexecuted_blocks=1 00:35:55.237 00:35:55.237 ' 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:55.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.237 --rc genhtml_branch_coverage=1 00:35:55.237 --rc genhtml_function_coverage=1 00:35:55.237 --rc genhtml_legend=1 00:35:55.237 --rc geninfo_all_blocks=1 00:35:55.237 --rc geninfo_unexecuted_blocks=1 00:35:55.237 00:35:55.237 ' 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:55.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.237 --rc genhtml_branch_coverage=1 00:35:55.237 --rc genhtml_function_coverage=1 00:35:55.237 --rc genhtml_legend=1 00:35:55.237 --rc geninfo_all_blocks=1 00:35:55.237 --rc geninfo_unexecuted_blocks=1 00:35:55.237 00:35:55.237 ' 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:55.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.237 --rc genhtml_branch_coverage=1 00:35:55.237 --rc genhtml_function_coverage=1 00:35:55.237 --rc genhtml_legend=1 00:35:55.237 --rc geninfo_all_blocks=1 00:35:55.237 --rc geninfo_unexecuted_blocks=1 00:35:55.237 00:35:55.237 ' 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:55.237 17:52:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:01.824 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:01.824 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:36:01.824 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:01.824 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:01.825 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.825 17:52:59 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:01.825 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:01.825 Found net devices under 0000:86:00.0: cvl_0_0 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:01.825 Found net devices under 0000:86:00.1: cvl_0_1 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:01.825 17:52:59 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:01.825 17:52:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:01.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:01.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:36:01.825 00:36:01.825 --- 10.0.0.2 ping statistics --- 00:36:01.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.825 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:01.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:01.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:36:01.825 00:36:01.825 --- 10.0.0.1 ping statistics --- 00:36:01.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.825 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=1346100 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 1346100 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 1346100 ']' 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:01.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:01.825 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:01.825 [2024-10-14 17:53:00.123150] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:01.826 [2024-10-14 17:53:00.124091] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:36:01.826 [2024-10-14 17:53:00.124126] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:01.826 [2024-10-14 17:53:00.197898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:01.826 [2024-10-14 17:53:00.239174] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:36:01.826 [2024-10-14 17:53:00.239206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:01.826 [2024-10-14 17:53:00.239213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:01.826 [2024-10-14 17:53:00.239218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:01.826 [2024-10-14 17:53:00.239224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:01.826 [2024-10-14 17:53:00.240408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:01.826 [2024-10-14 17:53:00.240411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:01.826 [2024-10-14 17:53:00.306965] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:01.826 [2024-10-14 17:53:00.307706] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:01.826 [2024-10-14 17:53:00.307864] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:36:01.826 5000+0 records in 00:36:01.826 5000+0 records out 00:36:01.826 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0178989 s, 572 MB/s 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:01.826 AIO0 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:01.826 [2024-10-14 17:53:00.437216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.826 17:53:00 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:01.826 [2024-10-14 17:53:00.473448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1346100 0 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1346100 0 idle 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1346100 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1346100 -w 256 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1346100 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.24 reactor_0' 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1346100 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.24 reactor_0 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1346100 1 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1346100 1 idle 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1346100 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1346100 -w 256 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1346141 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1346141 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1346264 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
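The reactor_is_idle checks above and the reactor_is_busy checks below share one probe: grab a single batch frame from top for the target pid's threads and read the %CPU column (field 9) for the reactor of interest. A condensed sketch with this run's pid; the harness treats under 30% as idle and, with BUSY_THRESHOLD=30, over 30% as busy while perf runs.

pid=1346100   # nvmf_tgt pid from this run
cpu=$(top -bHn 1 -p "$pid" -w 256 | grep reactor_0 | sed -e 's/^\s*//g' | awk '{print $9}')
# In interrupt mode a quiesced reactor sits near 0.0; once spdk_nvme_perf queues
# I/O on cores 2-3 (-c 0xC), the reactors climb toward 99.9 as seen below.
echo "reactor_0 is at ${cpu}% CPU"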
00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1346100 0 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1346100 0 busy 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1346100 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1346100 -w 256 00:36:01.826 17:53:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1346100 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.45 reactor_0' 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1346100 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.45 reactor_0 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1346100 1 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1346100 1 busy 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1346100 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1346100 -w 256 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1346141 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.29 reactor_1' 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1346141 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.29 reactor_1 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:02.086 17:53:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1346264 00:36:12.067 Initializing NVMe Controllers 00:36:12.067 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:12.067 Controller IO queue size 256, less than required. 00:36:12.067 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:12.067 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:12.067 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:12.067 Initialization complete. Launching workers. 
00:36:12.067 ========================================================
00:36:12.067 Latency(us)
00:36:12.067 Device Information : IOPS MiB/s Average min max
00:36:12.067 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16536.74 64.60 15488.23 3506.86 29721.46
00:36:12.067 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16420.14 64.14 15594.05 8167.82 26010.89
00:36:12.067 ========================================================
00:36:12.067 Total : 32956.88 128.74 15540.95 3506.86 29721.46
00:36:12.067
00:36:12.067 17:53:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:12.067 17:53:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1346100 0 00:36:12.067 17:53:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1346100 0 idle 00:36:12.067 17:53:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1346100 00:36:12.067 17:53:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:12.067 17:53:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:12.068 17:53:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:12.068 17:53:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:12.068 17:53:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:12.068 17:53:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:12.068 17:53:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:12.068 17:53:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:12.068 17:53:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:12.068 17:53:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1346100 -w 256 00:36:12.068 17:53:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1346100 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.23 reactor_0' 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1346100 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.23 reactor_0 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1346100 1 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1346100 1 idle 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1346100 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt --
interrupt/common.sh@11 -- # local idx=1 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1346100 -w 256 00:36:12.068 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:12.327 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1346141 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:36:12.327 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1346141 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:36:12.327 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:12.327 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:12.327 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:12.327 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:12.327 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:12.327 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:12.327 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:12.327 17:53:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:12.327 17:53:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:12.587 17:53:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:36:12.587 17:53:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:36:12.587 17:53:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:36:12.587 17:53:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:36:12.587 17:53:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1346100 0 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1346100 0 idle 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1346100 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1346100 -w 256 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1346100 root 20 0 128.2g 72960 34560 S 0.0 0.0 0:20.47 reactor_0' 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1346100 root 20 0 128.2g 72960 34560 S 0.0 0.0 0:20.47 reactor_0 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1346100 1 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1346100 1 idle 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1346100 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
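The nvme connect / waitforserial sequence traced above reduces to: connect to the subsystem, then poll lsblk until a block device with the test serial appears. A hedged sketch of that pattern — the NQN, address, and serial come from the trace, the retry budget mirrors the i <= 15 loop, and wait_for_serial is an illustrative name, not the helper in autotest_common.sh:

wait_for_serial() {
    local serial=$1 want=${2:-1} i=0
    while (( i++ <= 15 )); do
        sleep 2
        # count block devices whose SERIAL column matches the expected value
        local have
        have=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( have == want )) && return 0
    done
    return 1
}

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
wait_for_serial SPDKISFASTANDAWESOME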
00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1346100 -w 256 00:36:15.123 17:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:15.123 17:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1346141 root 20 0 128.2g 72960 34560 S 0.0 0.0 0:10.09 reactor_1' 00:36:15.123 17:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1346141 root 20 0 128.2g 72960 34560 S 0.0 0.0 0:10.09 reactor_1 00:36:15.123 17:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:15.123 17:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:15.123 17:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:15.123 17:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:15.123 17:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:15.123 17:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:15.123 17:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:15.123 17:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:15.123 17:53:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:15.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:15.383 rmmod nvme_tcp 00:36:15.383 rmmod nvme_fabrics 00:36:15.383 rmmod nvme_keyring 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 
1346100 ']' 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 1346100 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 1346100 ']' 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 1346100 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1346100 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1346100' 00:36:15.383 killing process with pid 1346100 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 1346100 00:36:15.383 17:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 1346100 00:36:15.643 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:15.643 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:15.643 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:15.643 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:36:15.643 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:36:15.643 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:15.643 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:36:15.643 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:15.643 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:15.643 17:53:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.643 17:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:15.643 17:53:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:18.182 17:53:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:18.182 00:36:18.182 real 0m22.794s 00:36:18.182 user 0m39.622s 00:36:18.182 sys 0m8.348s 00:36:18.182 17:53:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:18.182 17:53:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:18.182 ************************************ 00:36:18.182 END TEST nvmf_interrupt 00:36:18.182 ************************************ 00:36:18.182 00:36:18.182 real 27m4.006s 00:36:18.182 user 56m4.142s 00:36:18.182 sys 9m12.334s 00:36:18.182 17:53:16 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:18.182 17:53:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:18.182 ************************************ 00:36:18.182 END TEST nvmf_tcp 00:36:18.182 ************************************ 00:36:18.182 17:53:16 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:36:18.182 17:53:16 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:18.182 17:53:16 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:18.182 17:53:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:18.182 17:53:16 -- common/autotest_common.sh@10 -- # set +x 00:36:18.182 ************************************ 00:36:18.182 START TEST spdkcli_nvmf_tcp 00:36:18.182 ************************************ 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:18.182 * Looking for test storage... 00:36:18.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:36:18.182 17:53:16 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:18.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.182 --rc genhtml_branch_coverage=1 00:36:18.182 --rc genhtml_function_coverage=1 00:36:18.182 --rc genhtml_legend=1 00:36:18.182 --rc geninfo_all_blocks=1 00:36:18.182 --rc geninfo_unexecuted_blocks=1 00:36:18.182 00:36:18.182 ' 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:18.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.182 --rc genhtml_branch_coverage=1 00:36:18.182 --rc genhtml_function_coverage=1 00:36:18.182 --rc genhtml_legend=1 00:36:18.182 --rc geninfo_all_blocks=1 00:36:18.182 --rc geninfo_unexecuted_blocks=1 00:36:18.182 00:36:18.182 ' 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:18.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.182 --rc genhtml_branch_coverage=1 00:36:18.182 --rc genhtml_function_coverage=1 00:36:18.182 --rc genhtml_legend=1 00:36:18.182 --rc geninfo_all_blocks=1 00:36:18.182 --rc geninfo_unexecuted_blocks=1 00:36:18.182 00:36:18.182 ' 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:18.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.182 --rc genhtml_branch_coverage=1 00:36:18.182 --rc genhtml_function_coverage=1 00:36:18.182 --rc genhtml_legend=1 00:36:18.182 --rc geninfo_all_blocks=1 00:36:18.182 --rc geninfo_unexecuted_blocks=1 00:36:18.182 00:36:18.182 ' 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:18.182 
17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:18.182 17:53:17 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:18.182 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:18.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1348953 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1348953 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1348953 ']' 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:18.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:18.183 [2024-10-14 17:53:17.094138] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
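For the spdkcli test starting here, run_nvmf_tgt boots the target on a two-core mask and blocks until its JSON-RPC socket is ready. A simplified sketch of that launch; $SPDK_DIR and the readiness loop are assumptions (the real waitforlisten in autotest_common.sh does considerably more validation):

# Start the NVMe-oF target on cores 0-1 (-m 0x3) with main core 0 (-p 0)
"$SPDK_DIR/build/bin/nvmf_tgt" -m 0x3 -p 0 &
nvmf_tgt_pid=$!
# Wait for the default JSON-RPC UNIX socket before driving it with spdkcli
while ! [ -S /var/tmp/spdk.sock ]; do
    kill -0 "$nvmf_tgt_pid" 2>/dev/null || exit 1   # bail out if the target died
    sleep 0.5
done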
00:36:18.183 [2024-10-14 17:53:17.094185] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1348953 ] 00:36:18.183 [2024-10-14 17:53:17.160873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:18.183 [2024-10-14 17:53:17.201331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:18.183 [2024-10-14 17:53:17.201331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:18.183 17:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:18.442 17:53:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:18.442 17:53:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:18.442 17:53:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:18.442 17:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:18.442 17:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:18.442 17:53:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:18.443 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:18.443 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:18.443 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:18.443 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:18.443 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:18.443 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:18.443 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:18.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:18.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:18.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:18.443 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:18.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:18.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:18.443 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:18.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:18.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:36:18.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:18.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:18.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:18.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:18.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:18.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:18.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:18.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:18.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:18.443 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:18.443 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:18.443 ' 00:36:20.977 [2024-10-14 17:53:20.038004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:22.355 [2024-10-14 17:53:21.378470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:24.892 [2024-10-14 17:53:23.866121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:27.428 [2024-10-14 17:53:26.028847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:28.805 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:28.805 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:28.805 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:28.805 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:28.805 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:28.805 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:28.805 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:28.805 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:28.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:28.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:28.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:28.805 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:28.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:28.805 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:28.805 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:28.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:28.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:28.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:28.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:28.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:28.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:28.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:28.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:28.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:28.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:28.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:28.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:28.805 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:28.805 17:53:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:28.805 17:53:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:28.805 17:53:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:28.805 17:53:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:28.805 17:53:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:28.805 17:53:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:28.805 17:53:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:28.805 17:53:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:29.374 17:53:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:29.374 17:53:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:29.374 17:53:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:29.374 17:53:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:29.374 17:53:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:29.374 
17:53:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:29.374 17:53:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:29.374 17:53:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:29.374 17:53:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:29.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:29.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:29.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:29.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:29.374 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:29.374 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:29.374 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:29.374 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:29.374 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:29.374 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:29.374 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:29.374 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:29.374 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:29.374 ' 00:36:35.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:35.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:35.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:35.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:35.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:35.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:35.947 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:35.947 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:35.947 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:35.947 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:35.947 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:35.947 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:35.947 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:35.947 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:35.947 17:53:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:35.947 17:53:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:35.947 17:53:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:35.947 
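The batched create and delete jobs above can also be replayed one command at a time: spdkcli.py accepts a single command as its arguments, the same way the match check invokes spdkcli.py ll /nvmf. A few examples lifted from the job lists, run from an SPDK checkout against the default RPC socket:

scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
scripts/spdkcli.py ll /nvmf
scripts/spdkcli.py /nvmf/subsystem delete_all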
17:53:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1348953 00:36:35.947 17:53:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1348953 ']' 00:36:35.947 17:53:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1348953 00:36:35.947 17:53:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:36:35.947 17:53:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:35.948 17:53:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1348953 00:36:35.948 17:53:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:35.948 17:53:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:35.948 17:53:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1348953' 00:36:35.948 killing process with pid 1348953 00:36:35.948 17:53:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1348953 00:36:35.948 17:53:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1348953 00:36:35.948 17:53:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:35.948 17:53:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:35.948 17:53:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1348953 ']' 00:36:35.948 17:53:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1348953 00:36:35.948 17:53:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1348953 ']' 00:36:35.948 17:53:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1348953 00:36:35.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1348953) - No such process 00:36:35.948 17:53:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1348953 is not found' 00:36:35.948 Process with pid 1348953 is not found 00:36:35.948 17:53:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:35.948 17:53:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:35.948 17:53:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:35.948 00:36:35.948 real 0m17.317s 00:36:35.948 user 0m38.119s 00:36:35.948 sys 0m0.810s 00:36:35.948 17:53:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:35.948 17:53:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:35.948 ************************************ 00:36:35.948 END TEST spdkcli_nvmf_tcp 00:36:35.948 ************************************ 00:36:35.948 17:53:34 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:35.948 17:53:34 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:35.948 17:53:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:35.948 17:53:34 -- common/autotest_common.sh@10 -- # set +x 00:36:35.948 ************************************ 00:36:35.948 START TEST nvmf_identify_passthru 00:36:35.948 ************************************ 00:36:35.948 17:53:34 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:35.948 * Looking for test 
storage... 00:36:35.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:35.948 17:53:34 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:35.948 17:53:34 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:36:35.948 17:53:34 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:35.948 17:53:34 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:36:35.948 17:53:34 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:35.948 17:53:34 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:35.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.948 --rc genhtml_branch_coverage=1 00:36:35.948 --rc genhtml_function_coverage=1 00:36:35.948 --rc genhtml_legend=1 00:36:35.948 --rc geninfo_all_blocks=1 00:36:35.948 --rc geninfo_unexecuted_blocks=1 00:36:35.948 00:36:35.948 ' 00:36:35.948 17:53:34 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:35.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.948 --rc genhtml_branch_coverage=1 00:36:35.948 --rc genhtml_function_coverage=1 00:36:35.948 --rc genhtml_legend=1 00:36:35.948 --rc geninfo_all_blocks=1 00:36:35.948 --rc geninfo_unexecuted_blocks=1 00:36:35.948 00:36:35.948 ' 00:36:35.948 17:53:34 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:35.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.948 --rc genhtml_branch_coverage=1 00:36:35.948 --rc genhtml_function_coverage=1 00:36:35.948 --rc genhtml_legend=1 00:36:35.948 --rc geninfo_all_blocks=1 00:36:35.948 --rc geninfo_unexecuted_blocks=1 00:36:35.948 00:36:35.948 ' 00:36:35.948 17:53:34 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:35.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.948 --rc genhtml_branch_coverage=1 00:36:35.948 --rc genhtml_function_coverage=1 00:36:35.948 --rc genhtml_legend=1 00:36:35.948 --rc geninfo_all_blocks=1 00:36:35.948 --rc geninfo_unexecuted_blocks=1 00:36:35.948 00:36:35.948 ' 00:36:35.948 17:53:34 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:35.948 17:53:34 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:35.948 17:53:34 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.948 17:53:34 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.948 17:53:34 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.948 17:53:34 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:35.948 17:53:34 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:35.948 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:35.949 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:35.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:35.949 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:35.949 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:35.949 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:35.949 17:53:34 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:35.949 17:53:34 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:35.949 17:53:34 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:35.949 17:53:34 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:35.949 17:53:34 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:35.949 17:53:34 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.949 17:53:34 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.949 17:53:34 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.949 17:53:34 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:35.949 17:53:34 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.949 17:53:34 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:35.949 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:35.949 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:35.949 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:35.949 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:35.949 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:35.949 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:35.949 17:53:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:35.949 17:53:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:35.949 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:35.949 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:35.949 17:53:34 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:36:35.949 17:53:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:41.228 17:53:39 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:41.228 17:53:39 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:36:41.228 17:53:39 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:41.228 17:53:39 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:41.228 17:53:39 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:41.228 17:53:39 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:41.228 17:53:39 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:36:41.228 17:53:40 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:41.228 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:41.228 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:41.228 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:41.229 Found net devices under 0000:86:00.0: cvl_0_0 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:41.229 Found net devices under 0000:86:00.1: cvl_0_1 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:41.229 17:53:40 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:41.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:41.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:36:41.229 00:36:41.229 --- 10.0.0.2 ping statistics --- 00:36:41.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:41.229 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:41.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:41.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:36:41.229 00:36:41.229 --- 10.0.0.1 ping statistics --- 00:36:41.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:41.229 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:41.229 17:53:40 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:41.229 17:53:40 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:41.229 17:53:40 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:41.229 17:53:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:41.229 17:53:40 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:41.229 17:53:40 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:36:41.229 17:53:40 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:36:41.229 17:53:40 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:36:41.229 17:53:40 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:36:41.229 17:53:40 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:36:41.229 17:53:40 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:36:41.229 17:53:40 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:41.229 17:53:40 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:41.229 17:53:40 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:36:41.489 17:53:40 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:36:41.489 17:53:40 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:36:41.489 17:53:40 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:5e:00.0 00:36:41.489 17:53:40 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:36:41.489 17:53:40 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:36:41.489 17:53:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:36:41.489 17:53:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:41.489 17:53:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:46.767 17:53:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLN951000C61P6AGN 00:36:46.767 17:53:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:36:46.767 17:53:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:46.767 17:53:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:50.963 17:53:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:36:50.963 17:53:49 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:50.963 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:50.963 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:50.963 17:53:49 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:50.963 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:50.963 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:50.963 17:53:49 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1356438 00:36:50.963 17:53:49 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:50.963 17:53:49 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:50.963 17:53:49 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1356438 00:36:50.963 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1356438 ']' 00:36:50.963 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:50.963 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:50.963 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:50.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:50.963 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:50.963 17:53:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:50.963 [2024-10-14 17:53:49.922238] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:36:50.963 [2024-10-14 17:53:49.922284] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:50.963 [2024-10-14 17:53:49.994458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:50.963 [2024-10-14 17:53:50.041177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:50.963 [2024-10-14 17:53:50.041212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:50.963 [2024-10-14 17:53:50.041219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:50.963 [2024-10-14 17:53:50.041225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:50.963 [2024-10-14 17:53:50.041230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:50.963 [2024-10-14 17:53:50.042616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:50.963 [2024-10-14 17:53:50.042714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:50.963 [2024-10-14 17:53:50.042820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:50.963 [2024-10-14 17:53:50.042821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:50.963 17:53:50 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:50.963 17:53:50 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:36:50.963 17:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:50.963 17:53:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.963 17:53:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:50.963 INFO: Log level set to 20 00:36:50.963 INFO: Requests: 00:36:50.963 { 00:36:50.963 "jsonrpc": "2.0", 00:36:50.963 "method": "nvmf_set_config", 00:36:50.963 "id": 1, 00:36:50.963 "params": { 00:36:50.963 "admin_cmd_passthru": { 00:36:50.963 "identify_ctrlr": true 00:36:50.963 } 00:36:50.963 } 00:36:50.963 } 00:36:50.963 00:36:50.963 INFO: response: 00:36:50.963 { 00:36:50.963 "jsonrpc": "2.0", 00:36:50.963 "id": 1, 00:36:50.963 "result": true 00:36:50.963 } 00:36:50.963 00:36:50.963 17:53:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.963 17:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:50.963 17:53:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.963 17:53:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:50.963 INFO: Setting log level to 20 00:36:50.963 INFO: Setting log level to 20 00:36:50.963 INFO: Log level set to 20 00:36:50.963 INFO: Log level set to 20 00:36:50.963 INFO: Requests: 00:36:50.963 { 00:36:50.963 "jsonrpc": "2.0", 00:36:50.963 "method": "framework_start_init", 00:36:50.963 "id": 1 00:36:50.963 } 00:36:50.963 00:36:50.963 INFO: Requests: 00:36:50.963 { 00:36:50.963 "jsonrpc": "2.0", 00:36:50.963 "method": "framework_start_init", 00:36:50.963 "id": 1 00:36:50.963 } 00:36:50.963 00:36:51.223 [2024-10-14 17:53:50.150002] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:51.223 INFO: response: 00:36:51.223 { 00:36:51.223 "jsonrpc": "2.0", 00:36:51.223 "id": 1, 00:36:51.223 "result": true 00:36:51.223 } 00:36:51.223 00:36:51.223 INFO: response: 00:36:51.223 { 00:36:51.223 "jsonrpc": "2.0", 00:36:51.223 "id": 1, 00:36:51.223 "result": true 00:36:51.223 } 00:36:51.223 00:36:51.223 17:53:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.223 17:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:51.223 17:53:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.223 17:53:50 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:51.223 INFO: Setting log level to 40 00:36:51.223 INFO: Setting log level to 40 00:36:51.223 INFO: Setting log level to 40 00:36:51.223 [2024-10-14 17:53:50.163313] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:51.223 17:53:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.223 17:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:51.223 17:53:50 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:51.223 17:53:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:51.223 17:53:50 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:36:51.223 17:53:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.223 17:53:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:54.514 Nvme0n1 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.514 17:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.514 17:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.514 17:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:54.514 [2024-10-14 17:53:53.083450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.514 17:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:54.514 [ 00:36:54.514 { 00:36:54.514 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:54.514 "subtype": "Discovery", 00:36:54.514 "listen_addresses": [], 00:36:54.514 "allow_any_host": true, 00:36:54.514 "hosts": [] 00:36:54.514 }, 00:36:54.514 { 00:36:54.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:54.514 "subtype": "NVMe", 00:36:54.514 "listen_addresses": [ 00:36:54.514 { 00:36:54.514 "trtype": "TCP", 00:36:54.514 "adrfam": "IPv4", 00:36:54.514 "traddr": "10.0.0.2", 00:36:54.514 "trsvcid": "4420" 00:36:54.514 } 00:36:54.514 ], 00:36:54.514 "allow_any_host": true, 00:36:54.514 "hosts": [], 00:36:54.514 "serial_number": 
"SPDK00000000000001", 00:36:54.514 "model_number": "SPDK bdev Controller", 00:36:54.514 "max_namespaces": 1, 00:36:54.514 "min_cntlid": 1, 00:36:54.514 "max_cntlid": 65519, 00:36:54.514 "namespaces": [ 00:36:54.514 { 00:36:54.514 "nsid": 1, 00:36:54.514 "bdev_name": "Nvme0n1", 00:36:54.514 "name": "Nvme0n1", 00:36:54.514 "nguid": "2CB0DF35318740D692E766728942559E", 00:36:54.514 "uuid": "2cb0df35-3187-40d6-92e7-66728942559e" 00:36:54.514 } 00:36:54.514 ] 00:36:54.514 } 00:36:54.514 ] 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.514 17:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:54.514 17:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:54.514 17:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:54.514 17:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:36:54.514 17:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:54.514 17:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:54.514 17:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:54.514 17:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:36:54.514 17:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:36:54.514 17:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:36:54.514 17:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.514 17:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:54.514 17:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:54.514 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:54.514 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:54.514 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:54.514 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:54.514 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:54.514 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:54.514 rmmod nvme_tcp 00:36:54.514 rmmod nvme_fabrics 00:36:54.514 rmmod nvme_keyring 00:36:54.514 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:54.514 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:54.514 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:54.514 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@515 -- # 
'[' -n 1356438 ']' 00:36:54.514 17:53:53 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 1356438 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1356438 ']' 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1356438 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1356438 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:54.514 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:54.515 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1356438' 00:36:54.515 killing process with pid 1356438 00:36:54.515 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1356438 00:36:54.515 17:53:53 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1356438 00:36:57.052 17:53:55 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:57.052 17:53:55 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:57.052 17:53:55 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:57.052 17:53:55 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:57.052 17:53:55 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:36:57.052 17:53:55 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:57.052 17:53:55 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:36:57.052 17:53:55 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:57.052 17:53:55 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:57.052 17:53:55 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:57.052 17:53:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:57.052 17:53:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:58.960 17:53:57 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:58.960 00:36:58.960 real 0m23.502s 00:36:58.960 user 0m30.077s 00:36:58.960 sys 0m6.245s 00:36:58.960 17:53:57 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:58.960 17:53:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:58.960 ************************************ 00:36:58.960 END TEST nvmf_identify_passthru 00:36:58.960 ************************************ 00:36:58.960 17:53:57 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:58.960 17:53:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:58.960 17:53:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:58.960 17:53:57 -- common/autotest_common.sh@10 -- # set +x 00:36:58.960 ************************************ 00:36:58.960 START TEST nvmf_dif 00:36:58.960 ************************************ 00:36:58.960 17:53:57 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:58.960 * Looking for test 
storage... 00:36:58.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:58.960 17:53:57 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:58.960 17:53:57 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:36:58.960 17:53:57 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:58.960 17:53:57 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:58.960 17:53:57 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:58.960 17:53:57 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:58.960 17:53:57 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:58.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.960 --rc genhtml_branch_coverage=1 00:36:58.960 --rc genhtml_function_coverage=1 00:36:58.960 --rc genhtml_legend=1 00:36:58.960 --rc geninfo_all_blocks=1 00:36:58.960 --rc geninfo_unexecuted_blocks=1 00:36:58.960 00:36:58.960 ' 00:36:58.961 17:53:57 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:58.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.961 --rc genhtml_branch_coverage=1 00:36:58.961 --rc genhtml_function_coverage=1 00:36:58.961 --rc genhtml_legend=1 00:36:58.961 --rc geninfo_all_blocks=1 00:36:58.961 --rc geninfo_unexecuted_blocks=1 00:36:58.961 00:36:58.961 ' 00:36:58.961 17:53:57 nvmf_dif -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:58.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.961 --rc genhtml_branch_coverage=1 00:36:58.961 --rc genhtml_function_coverage=1 00:36:58.961 --rc genhtml_legend=1 00:36:58.961 --rc geninfo_all_blocks=1 00:36:58.961 --rc geninfo_unexecuted_blocks=1 00:36:58.961 00:36:58.961 ' 00:36:58.961 17:53:57 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:58.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.961 --rc genhtml_branch_coverage=1 00:36:58.961 --rc genhtml_function_coverage=1 00:36:58.961 --rc genhtml_legend=1 00:36:58.961 --rc geninfo_all_blocks=1 00:36:58.961 --rc geninfo_unexecuted_blocks=1 00:36:58.961 00:36:58.961 ' 00:36:58.961 17:53:57 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:58.961 17:53:57 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:58.961 17:53:57 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:58.961 17:53:57 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:58.961 17:53:57 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:58.961 17:53:57 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.961 17:53:57 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.961 17:53:57 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.961 17:53:57 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:58.961 17:53:57 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:58.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:58.961 17:53:57 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:58.961 17:53:57 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:58.961 17:53:57 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:58.961 17:53:57 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:58.961 17:53:57 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:58.961 17:53:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:58.961 17:53:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:58.961 17:53:57 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:36:58.961 17:53:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:05.531 17:54:03 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:05.531 17:54:03 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:37:05.531 17:54:03 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:05.531 17:54:03 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:05.531 17:54:03 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:05.531 17:54:03 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:05.532 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:05.532 
17:54:03 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:05.532 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:05.532 Found net devices under 0000:86:00.0: cvl_0_0 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:05.532 Found net devices under 0000:86:00.1: cvl_0_1 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:05.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:05.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:37:05.532 00:37:05.532 --- 10.0.0.2 ping statistics --- 00:37:05.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:05.532 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:05.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:05.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:37:05.532 00:37:05.532 --- 10.0.0.1 ping statistics --- 00:37:05.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:05.532 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:37:05.532 17:54:03 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:07.437 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:37:07.437 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:37:07.437 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:37:07.437 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:37:07.437 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:37:07.437 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:37:07.437 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:37:07.437 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:37:07.437 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:37:07.437 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:37:07.437 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:37:07.437 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:37:07.437 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:37:07.437 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:37:07.437 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:37:07.437 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:37:07.437 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:37:07.696 17:54:06 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:07.696 17:54:06 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:07.696 17:54:06 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:07.696 17:54:06 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:07.696 17:54:06 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:07.696 17:54:06 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:07.697 17:54:06 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:37:07.697 17:54:06 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:37:07.697 17:54:06 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:07.697 17:54:06 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:07.697 17:54:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:07.697 17:54:06 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=1362026 00:37:07.697 17:54:06 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 1362026 00:37:07.697 17:54:06 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:07.697 17:54:06 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1362026 ']' 00:37:07.697 17:54:06 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:07.697 17:54:06 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:07.697 17:54:06 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:37:07.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:07.697 17:54:06 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:07.697 17:54:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:07.697 [2024-10-14 17:54:06.795551] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:37:07.697 [2024-10-14 17:54:06.795597] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:07.976 [2024-10-14 17:54:06.867994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.976 [2024-10-14 17:54:06.908471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:07.976 [2024-10-14 17:54:06.908506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:07.976 [2024-10-14 17:54:06.908513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:07.976 [2024-10-14 17:54:06.908519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:07.976 [2024-10-14 17:54:06.908524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:07.976 [2024-10-14 17:54:06.909039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:07.976 17:54:06 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:07.976 17:54:06 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:37:07.976 17:54:06 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:07.976 17:54:06 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:07.976 17:54:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:07.976 17:54:07 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:07.976 17:54:07 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:07.976 17:54:07 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:07.976 17:54:07 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.976 17:54:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:07.976 [2024-10-14 17:54:07.042496] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:07.976 17:54:07 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.976 17:54:07 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:07.976 17:54:07 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:07.976 17:54:07 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:07.976 17:54:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:07.976 ************************************ 00:37:07.976 START TEST fio_dif_1_default 00:37:07.976 ************************************ 00:37:07.976 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:37:07.976 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:07.976 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:07.976 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:37:07.976 17:54:07 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:37:07.976 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:07.976 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:07.976 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.976 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:07.976 bdev_null0 00:37:07.976 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.976 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:07.976 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.976 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:08.338 [2024-10-14 17:54:07.114822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:08.338 { 00:37:08.338 "params": { 00:37:08.338 "name": "Nvme$subsystem", 00:37:08.338 "trtype": "$TEST_TRANSPORT", 00:37:08.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:08.338 "adrfam": "ipv4", 00:37:08.338 "trsvcid": "$NVMF_PORT", 00:37:08.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:08.338 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:37:08.338 "hdgst": ${hdgst:-false}, 00:37:08.338 "ddgst": ${ddgst:-false} 00:37:08.338 }, 00:37:08.338 "method": "bdev_nvme_attach_controller" 00:37:08.338 } 00:37:08.338 EOF 00:37:08.338 )") 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:08.338 "params": { 00:37:08.338 "name": "Nvme0", 00:37:08.338 "trtype": "tcp", 00:37:08.338 "traddr": "10.0.0.2", 00:37:08.338 "adrfam": "ipv4", 00:37:08.338 "trsvcid": "4420", 00:37:08.338 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:08.338 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:08.338 "hdgst": false, 00:37:08.338 "ddgst": false 00:37:08.338 }, 00:37:08.338 "method": "bdev_nvme_attach_controller" 00:37:08.338 }' 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:08.338 17:54:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:08.656 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:08.656 fio-3.35 00:37:08.656 Starting 1 thread 00:37:20.892 00:37:20.892 filename0: (groupid=0, jobs=1): err= 0: pid=1362408: Mon Oct 14 17:54:18 2024 00:37:20.892 read: IOPS=98, BW=394KiB/s (403kB/s)(3952KiB/10042msec) 00:37:20.892 slat (nsec): min=5722, max=32273, avg=6303.89, stdev=1257.86 00:37:20.892 clat (usec): min=388, max=45610, avg=40636.86, stdev=5813.12 00:37:20.892 lat (usec): min=394, max=45643, avg=40643.16, stdev=5813.18 00:37:20.892 clat percentiles (usec): 00:37:20.892 | 1.00th=[ 404], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:20.892 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:37:20.892 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:20.892 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:37:20.892 | 99.99th=[45351] 00:37:20.892 bw ( KiB/s): min= 383, max= 416, per=99.86%, avg=393.55, stdev=15.08, samples=20 00:37:20.892 iops : min= 95, max= 104, avg=98.35, stdev= 3.80, samples=20 00:37:20.892 lat (usec) : 500=2.02% 00:37:20.892 lat (msec) : 50=97.98% 00:37:20.892 cpu : usr=92.22%, sys=7.52%, ctx=11, majf=0, minf=0 00:37:20.892 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:20.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:20.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:20.892 issued rwts: total=988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:20.892 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:20.892 00:37:20.892 Run 
status group 0 (all jobs): 00:37:20.892 READ: bw=394KiB/s (403kB/s), 394KiB/s-394KiB/s (403kB/s-403kB/s), io=3952KiB (4047kB), run=10042-10042msec 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.892 00:37:20.892 real 0m11.278s 00:37:20.892 user 0m16.278s 00:37:20.892 sys 0m1.082s 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:20.892 ************************************ 00:37:20.892 END TEST fio_dif_1_default 00:37:20.892 ************************************ 00:37:20.892 17:54:18 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:20.892 17:54:18 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:20.892 17:54:18 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:20.892 17:54:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:20.892 ************************************ 00:37:20.892 START TEST fio_dif_1_multi_subsystems 00:37:20.892 ************************************ 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.892 bdev_null0 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.892 [2024-10-14 17:54:18.465687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.892 bdev_null1 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:20.892 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:20.893 { 00:37:20.893 "params": { 00:37:20.893 "name": "Nvme$subsystem", 00:37:20.893 "trtype": "$TEST_TRANSPORT", 00:37:20.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:20.893 "adrfam": "ipv4", 00:37:20.893 "trsvcid": "$NVMF_PORT", 00:37:20.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:20.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:20.893 "hdgst": ${hdgst:-false}, 00:37:20.893 "ddgst": ${ddgst:-false} 00:37:20.893 }, 00:37:20.893 "method": "bdev_nvme_attach_controller" 00:37:20.893 } 00:37:20.893 EOF 00:37:20.893 )") 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:20.893 { 00:37:20.893 "params": { 00:37:20.893 "name": "Nvme$subsystem", 00:37:20.893 "trtype": "$TEST_TRANSPORT", 00:37:20.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:20.893 "adrfam": "ipv4", 00:37:20.893 "trsvcid": "$NVMF_PORT", 00:37:20.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:20.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:20.893 "hdgst": ${hdgst:-false}, 00:37:20.893 "ddgst": ${ddgst:-false} 00:37:20.893 }, 00:37:20.893 "method": "bdev_nvme_attach_controller" 00:37:20.893 } 00:37:20.893 EOF 00:37:20.893 )") 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:20.893 "params": { 00:37:20.893 "name": "Nvme0", 00:37:20.893 "trtype": "tcp", 00:37:20.893 "traddr": "10.0.0.2", 00:37:20.893 "adrfam": "ipv4", 00:37:20.893 "trsvcid": "4420", 00:37:20.893 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:20.893 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:20.893 "hdgst": false, 00:37:20.893 "ddgst": false 00:37:20.893 }, 00:37:20.893 "method": "bdev_nvme_attach_controller" 00:37:20.893 },{ 00:37:20.893 "params": { 00:37:20.893 "name": "Nvme1", 00:37:20.893 "trtype": "tcp", 00:37:20.893 "traddr": "10.0.0.2", 00:37:20.893 "adrfam": "ipv4", 00:37:20.893 "trsvcid": "4420", 00:37:20.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:20.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:20.893 "hdgst": false, 00:37:20.893 "ddgst": false 00:37:20.893 }, 00:37:20.893 "method": "bdev_nvme_attach_controller" 00:37:20.893 }' 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 
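All the ldd probing above serves one purpose: if the SPDK fio plugin was built with a sanitizer, libasan (or libclang_rt.asan) must be preloaded ahead of the plugin or fio aborts at startup. Both greps came back empty on this builder, so LD_PRELOAD ends up holding just the plugin. Stripped of harness plumbing, the logic is:

# Preload the sanitizer runtime (if any) before the SPDK fio plugin.
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # empty on this run
LD_PRELOAD="$asan_lib $plugin" \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61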
00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:20.893 17:54:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:20.893 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:20.893 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:20.893 fio-3.35 00:37:20.893 Starting 2 threads 00:37:30.873 00:37:30.873 filename0: (groupid=0, jobs=1): err= 0: pid=1364777: Mon Oct 14 17:54:29 2024 00:37:30.873 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10010msec) 00:37:30.873 slat (nsec): min=5765, max=50308, avg=7576.30, stdev=2795.59 00:37:30.873 clat (usec): min=40792, max=42041, avg=41000.87, stdev=157.52 00:37:30.873 lat (usec): min=40798, max=42053, avg=41008.45, stdev=157.91 00:37:30.873 clat percentiles (usec): 00:37:30.873 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:37:30.873 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:30.873 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:30.873 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:30.873 | 99.99th=[42206] 00:37:30.873 bw ( KiB/s): min= 384, max= 416, per=49.74%, avg=388.80, stdev=11.72, samples=20 00:37:30.873 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:37:30.873 lat (msec) : 50=100.00% 00:37:30.873 cpu : usr=97.03%, sys=2.72%, ctx=13, majf=0, minf=59 00:37:30.873 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:30.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.873 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.873 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:30.873 filename1: (groupid=0, jobs=1): err= 0: pid=1364778: Mon Oct 14 17:54:29 2024 00:37:30.873 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10009msec) 00:37:30.873 slat (nsec): min=5805, max=50168, avg=7571.03, stdev=2908.33 00:37:30.873 clat (usec): min=40836, max=41987, avg=40996.53, stdev=137.36 00:37:30.873 lat (usec): min=40842, max=41999, avg=41004.10, stdev=137.75 00:37:30.873 clat percentiles (usec): 00:37:30.873 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:30.873 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:30.873 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:30.873 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:30.873 | 99.99th=[42206] 00:37:30.873 bw ( KiB/s): min= 384, max= 416, per=49.74%, avg=388.80, stdev=11.72, samples=20 00:37:30.873 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:37:30.873 lat (msec) : 50=100.00% 00:37:30.873 cpu : usr=96.76%, sys=2.99%, ctx=5, majf=0, minf=196 00:37:30.873 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:30.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:30.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:37:30.873 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:30.873 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:30.873 00:37:30.873 Run status group 0 (all jobs): 00:37:30.873 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10009-10010msec 00:37:30.873 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:30.873 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:30.873 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:30.873 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:30.873 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:30.873 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:30.873 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.873 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:30.873 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.874 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:30.874 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.874 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:30.874 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.874 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:30.874 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:30.874 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:30.874 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:30.874 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.874 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:30.874 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.874 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:30.874 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.874 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:30.874 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.874 00:37:30.874 real 0m11.478s 00:37:30.874 user 0m26.285s 00:37:30.874 sys 0m0.937s 00:37:30.874 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:30.874 17:54:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:30.874 ************************************ 00:37:30.874 END TEST fio_dif_1_multi_subsystems 00:37:30.874 ************************************ 00:37:30.874 17:54:29 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:30.874 17:54:29 nvmf_dif -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:30.874 17:54:29 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:30.874 17:54:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:30.874 ************************************ 00:37:30.874 START TEST fio_dif_rand_params 00:37:30.874 ************************************ 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:30.874 bdev_null0 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.874 17:54:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:30.874 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.874 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:30.874 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.874 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:30.874 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:31.134 [2024-10-14 17:54:30.016682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:31.134 { 00:37:31.134 "params": { 00:37:31.134 "name": "Nvme$subsystem", 00:37:31.134 "trtype": "$TEST_TRANSPORT", 00:37:31.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:31.134 "adrfam": "ipv4", 00:37:31.134 "trsvcid": "$NVMF_PORT", 00:37:31.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:31.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:31.134 "hdgst": ${hdgst:-false}, 00:37:31.134 "ddgst": ${ddgst:-false} 00:37:31.134 }, 00:37:31.134 "method": "bdev_nvme_attach_controller" 00:37:31.134 } 00:37:31.134 EOF 00:37:31.134 )") 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
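Condensed, the create_subsystems call traced a few lines up is four RPCs against the target running in the cvl_0_0_ns_spdk namespace (rpc.py standing in for the rpc_cmd wrapper; arguments exactly as in the trace):

rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420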
00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:31.134 "params": { 00:37:31.134 "name": "Nvme0", 00:37:31.134 "trtype": "tcp", 00:37:31.134 "traddr": "10.0.0.2", 00:37:31.134 "adrfam": "ipv4", 00:37:31.134 "trsvcid": "4420", 00:37:31.134 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:31.134 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:31.134 "hdgst": false, 00:37:31.134 "ddgst": false 00:37:31.134 }, 00:37:31.134 "method": "bdev_nvme_attach_controller" 00:37:31.134 }' 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:31.134 17:54:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:31.393 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:31.393 ... 
00:37:31.393 fio-3.35 00:37:31.393 Starting 3 threads 00:37:37.961 00:37:37.961 filename0: (groupid=0, jobs=1): err= 0: pid=1366733: Mon Oct 14 17:54:36 2024 00:37:37.961 read: IOPS=323, BW=40.4MiB/s (42.4MB/s)(204MiB/5047msec) 00:37:37.961 slat (nsec): min=6081, max=60750, avg=12582.53, stdev=5415.91 00:37:37.961 clat (usec): min=3338, max=52219, avg=9228.61, stdev=6729.96 00:37:37.961 lat (usec): min=3344, max=52249, avg=9241.19, stdev=6729.91 00:37:37.961 clat percentiles (usec): 00:37:37.961 | 1.00th=[ 3687], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 7308], 00:37:37.961 | 30.00th=[ 7767], 40.00th=[ 8029], 50.00th=[ 8291], 60.00th=[ 8455], 00:37:37.961 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9634], 95.00th=[10290], 00:37:37.961 | 99.00th=[47973], 99.50th=[50070], 99.90th=[51119], 99.95th=[52167], 00:37:37.961 | 99.99th=[52167] 00:37:37.961 bw ( KiB/s): min=32000, max=47104, per=34.64%, avg=41728.00, stdev=5045.50, samples=10 00:37:37.961 iops : min= 250, max= 368, avg=326.00, stdev=39.42, samples=10 00:37:37.961 lat (msec) : 4=1.29%, 10=92.28%, 20=3.55%, 50=2.51%, 100=0.37% 00:37:37.961 cpu : usr=94.65%, sys=5.01%, ctx=11, majf=0, minf=62 00:37:37.961 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:37.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.961 issued rwts: total=1633,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.961 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:37.961 filename0: (groupid=0, jobs=1): err= 0: pid=1366734: Mon Oct 14 17:54:36 2024 00:37:37.961 read: IOPS=323, BW=40.5MiB/s (42.4MB/s)(203MiB/5003msec) 00:37:37.961 slat (nsec): min=6070, max=49774, avg=12146.16, stdev=5217.47 00:37:37.961 clat (usec): min=3243, max=49001, avg=9248.93, stdev=5174.39 00:37:37.961 lat (usec): min=3251, max=49030, avg=9261.07, stdev=5174.42 00:37:37.961 clat percentiles (usec): 00:37:37.961 | 1.00th=[ 3556], 5.00th=[ 5342], 10.00th=[ 6063], 20.00th=[ 7242], 00:37:37.961 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241], 00:37:37.961 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[10814], 95.00th=[11338], 00:37:37.961 | 99.00th=[45876], 99.50th=[46924], 99.90th=[49021], 99.95th=[49021], 00:37:37.961 | 99.99th=[49021] 00:37:37.961 bw ( KiB/s): min=28928, max=46336, per=34.38%, avg=41420.80, stdev=4743.18, samples=10 00:37:37.961 iops : min= 226, max= 362, avg=323.60, stdev=37.06, samples=10 00:37:37.961 lat (msec) : 4=1.91%, 10=75.43%, 20=20.99%, 50=1.67% 00:37:37.961 cpu : usr=94.68%, sys=5.00%, ctx=11, majf=0, minf=64 00:37:37.961 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:37.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.961 issued rwts: total=1620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.961 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:37.961 filename0: (groupid=0, jobs=1): err= 0: pid=1366735: Mon Oct 14 17:54:36 2024 00:37:37.961 read: IOPS=296, BW=37.1MiB/s (38.9MB/s)(187MiB/5045msec) 00:37:37.961 slat (nsec): min=6138, max=40638, avg=11975.90, stdev=4800.82 00:37:37.961 clat (usec): min=3468, max=51003, avg=10068.06, stdev=6587.07 00:37:37.961 lat (usec): min=3475, max=51015, avg=10080.03, stdev=6586.87 00:37:37.961 clat percentiles (usec): 00:37:37.961 | 1.00th=[ 3589], 5.00th=[ 5735], 10.00th=[ 6325], 
20.00th=[ 7898], 00:37:37.961 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:37:37.961 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11338], 95.00th=[11731], 00:37:37.961 | 99.00th=[48497], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:37:37.961 | 99.99th=[51119] 00:37:37.961 bw ( KiB/s): min=29952, max=42752, per=31.75%, avg=38246.40, stdev=4501.20, samples=10 00:37:37.961 iops : min= 234, max= 334, avg=298.80, stdev=35.17, samples=10 00:37:37.961 lat (msec) : 4=2.27%, 10=64.66%, 20=30.33%, 50=2.20%, 100=0.53% 00:37:37.961 cpu : usr=95.32%, sys=4.38%, ctx=11, majf=0, minf=47 00:37:37.961 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:37.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:37.961 issued rwts: total=1497,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:37.961 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:37.961 00:37:37.961 Run status group 0 (all jobs): 00:37:37.961 READ: bw=118MiB/s (123MB/s), 37.1MiB/s-40.5MiB/s (38.9MB/s-42.4MB/s), io=594MiB (623MB), run=5003-5047msec 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.961 bdev_null0 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.961 [2024-10-14 17:54:36.300737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.961 bdev_null1 00:37:37.961 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.962 17:54:36 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.962 bdev_null2 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
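The trace above creates three DIF-type-2 null bdevs and exports each one through its own NVMe-oF/TCP subsystem before launching fio. A minimal standalone sketch of that RPC sequence, assuming a running SPDK target and that SPDK's rpc.py is on PATH (the listener address 10.0.0.2:4420 and all command arguments are taken verbatim from the trace):

#!/usr/bin/env bash
# Sketch of the subsystem setup traced above: one null bdev per subsystem,
# each with 512-byte blocks, 16-byte metadata, and DIF type 2, exported over
# NVMe/TCP. Assumes a running SPDK target; RPC location is an assumption.
set -euo pipefail

RPC=${RPC:-rpc.py}   # assumption: scripts/rpc.py from the SPDK tree

for sub in 0 1 2; do
    # 64 MiB null bdev, 512B blocks + 16B metadata, protection type 2
    "$RPC" bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
    "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
        --serial-number "53313233-$sub" --allow-any-host
    "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
        -t tcp -a 10.0.0.2 -s 4420
done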
00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:37.962 { 00:37:37.962 "params": { 00:37:37.962 "name": "Nvme$subsystem", 00:37:37.962 "trtype": "$TEST_TRANSPORT", 00:37:37.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:37.962 "adrfam": "ipv4", 00:37:37.962 "trsvcid": "$NVMF_PORT", 00:37:37.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:37.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:37.962 "hdgst": ${hdgst:-false}, 00:37:37.962 "ddgst": ${ddgst:-false} 00:37:37.962 }, 00:37:37.962 "method": "bdev_nvme_attach_controller" 00:37:37.962 } 00:37:37.962 EOF 00:37:37.962 )") 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:37.962 { 00:37:37.962 "params": { 00:37:37.962 "name": "Nvme$subsystem", 00:37:37.962 "trtype": "$TEST_TRANSPORT", 00:37:37.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:37.962 "adrfam": "ipv4", 00:37:37.962 "trsvcid": "$NVMF_PORT", 00:37:37.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:37.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:37.962 "hdgst": ${hdgst:-false}, 00:37:37.962 "ddgst": ${ddgst:-false} 00:37:37.962 }, 00:37:37.962 "method": "bdev_nvme_attach_controller" 00:37:37.962 } 00:37:37.962 EOF 00:37:37.962 )") 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
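The config+=("$(cat <<-EOF ... EOF)") pattern traced above builds one bdev_nvme_attach_controller JSON fragment per subsystem; the fragments are then comma-joined (the IFS=, and printf steps visible further down) into the configuration fio reads via --spdk_json_conf /dev/fd/62. A self-contained sketch of the same assembly, assuming TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, and NVMF_PORT are exported as in the test environment:

# Sketch of the per-subsystem config fragments assembled in the trace above.
# Each fragment becomes one bdev_nvme_attach_controller call in the JSON fed
# to fio's spdk_bdev ioengine.
config=()
for subsystem in 0 1 2; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Comma-join the fragments, as the trace's IFS=, + printf step does.
(IFS=,; printf '%s\n' "${config[*]}")

The joined output matches the printf result shown in the trace: three attach-controller objects separated by "},{" with the environment variables substituted.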
00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:37.962 { 00:37:37.962 "params": { 00:37:37.962 "name": "Nvme$subsystem", 00:37:37.962 "trtype": "$TEST_TRANSPORT", 00:37:37.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:37.962 "adrfam": "ipv4", 00:37:37.962 "trsvcid": "$NVMF_PORT", 00:37:37.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:37.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:37.962 "hdgst": ${hdgst:-false}, 00:37:37.962 "ddgst": ${ddgst:-false} 00:37:37.962 }, 00:37:37.962 "method": "bdev_nvme_attach_controller" 00:37:37.962 } 00:37:37.962 EOF 00:37:37.962 )") 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:37:37.962 17:54:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:37.962 "params": { 00:37:37.962 "name": "Nvme0", 00:37:37.962 "trtype": "tcp", 00:37:37.962 "traddr": "10.0.0.2", 00:37:37.962 "adrfam": "ipv4", 00:37:37.962 "trsvcid": "4420", 00:37:37.962 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:37.962 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:37.962 "hdgst": false, 00:37:37.962 "ddgst": false 00:37:37.962 }, 00:37:37.962 "method": "bdev_nvme_attach_controller" 00:37:37.962 },{ 00:37:37.962 "params": { 00:37:37.962 "name": "Nvme1", 00:37:37.962 "trtype": "tcp", 00:37:37.962 "traddr": "10.0.0.2", 00:37:37.962 "adrfam": "ipv4", 00:37:37.962 "trsvcid": "4420", 00:37:37.962 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:37.962 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:37.962 "hdgst": false, 00:37:37.962 "ddgst": false 00:37:37.962 }, 00:37:37.962 "method": "bdev_nvme_attach_controller" 00:37:37.962 },{ 00:37:37.962 "params": { 00:37:37.962 "name": "Nvme2", 00:37:37.962 "trtype": "tcp", 00:37:37.962 "traddr": "10.0.0.2", 00:37:37.962 "adrfam": "ipv4", 00:37:37.962 "trsvcid": "4420", 00:37:37.962 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:37.962 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:37.962 "hdgst": false, 00:37:37.962 "ddgst": false 00:37:37.962 }, 00:37:37.962 "method": "bdev_nvme_attach_controller" 00:37:37.962 }' 00:37:37.963 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:37.963 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:37.963 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:37.963 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:37.963 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:37.963 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:37.963 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # 
asan_lib= 00:37:37.963 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:37.963 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:37.963 17:54:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:37.963 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:37.963 ... 00:37:37.963 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:37.963 ... 00:37:37.963 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:37.963 ... 00:37:37.963 fio-3.35 00:37:37.963 Starting 24 threads 00:37:50.177 00:37:50.177 filename0: (groupid=0, jobs=1): err= 0: pid=1367783: Mon Oct 14 17:54:47 2024 00:37:50.177 read: IOPS=538, BW=2152KiB/s (2204kB/s)(21.1MiB/10020msec) 00:37:50.177 slat (nsec): min=7413, max=59161, avg=13542.69, stdev=4859.49 00:37:50.177 clat (usec): min=4476, max=34896, avg=29615.65, stdev=2855.44 00:37:50.177 lat (usec): min=4497, max=34913, avg=29629.19, stdev=2855.00 00:37:50.177 clat percentiles (usec): 00:37:50.177 | 1.00th=[13960], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:37:50.177 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:37:50.177 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:37:50.177 | 99.00th=[31327], 99.50th=[32113], 99.90th=[34866], 99.95th=[34866], 00:37:50.177 | 99.99th=[34866] 00:37:50.177 bw ( KiB/s): min= 2048, max= 2560, per=4.20%, avg=2150.60, stdev=114.54, samples=20 00:37:50.177 iops : min= 512, max= 640, avg=537.65, stdev=28.63, samples=20 00:37:50.177 lat (msec) : 10=0.89%, 20=1.19%, 50=97.92% 00:37:50.177 cpu : usr=98.57%, sys=1.08%, ctx=14, majf=0, minf=54 00:37:50.177 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:50.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.177 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.177 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.177 filename0: (groupid=0, jobs=1): err= 0: pid=1367784: Mon Oct 14 17:54:47 2024 00:37:50.177 read: IOPS=532, BW=2128KiB/s (2179kB/s)(20.8MiB/10015msec) 00:37:50.177 slat (nsec): min=7471, max=66755, avg=30332.78, stdev=9835.28 00:37:50.177 clat (usec): min=15405, max=40094, avg=29818.66, stdev=1205.74 00:37:50.177 lat (usec): min=15424, max=40110, avg=29848.99, stdev=1206.11 00:37:50.177 clat percentiles (usec): 00:37:50.177 | 1.00th=[23200], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:37:50.177 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:50.177 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:50.177 | 99.00th=[31327], 99.50th=[32900], 99.90th=[40109], 99.95th=[40109], 00:37:50.177 | 99.99th=[40109] 00:37:50.177 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2124.80, stdev=64.34, samples=20 00:37:50.177 iops : min= 512, max= 544, avg=531.20, stdev=16.08, samples=20 00:37:50.177 lat (msec) : 20=0.45%, 50=99.55% 00:37:50.177 cpu : usr=98.64%, sys=1.00%, ctx=14, majf=0, minf=42 00:37:50.177 IO depths : 1=6.2%, 
2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:50.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.177 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.177 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.177 filename0: (groupid=0, jobs=1): err= 0: pid=1367785: Mon Oct 14 17:54:47 2024 00:37:50.177 read: IOPS=556, BW=2226KiB/s (2279kB/s)(21.7MiB/10003msec) 00:37:50.177 slat (nsec): min=6812, max=80534, avg=16059.95, stdev=13047.14 00:37:50.177 clat (usec): min=3657, max=65874, avg=28684.54, stdev=5124.19 00:37:50.177 lat (usec): min=3663, max=65914, avg=28700.60, stdev=5125.01 00:37:50.177 clat percentiles (usec): 00:37:50.177 | 1.00th=[17171], 5.00th=[19006], 10.00th=[20579], 20.00th=[25822], 00:37:50.177 | 30.00th=[27919], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:37:50.177 | 70.00th=[30016], 80.00th=[30278], 90.00th=[33162], 95.00th=[37487], 00:37:50.177 | 99.00th=[41157], 99.50th=[44827], 99.90th=[55313], 99.95th=[55313], 00:37:50.177 | 99.99th=[65799] 00:37:50.177 bw ( KiB/s): min= 2080, max= 2872, per=4.33%, avg=2216.42, stdev=176.82, samples=19 00:37:50.177 iops : min= 520, max= 718, avg=554.11, stdev=44.20, samples=19 00:37:50.177 lat (msec) : 4=0.11%, 10=0.25%, 20=6.14%, 50=93.21%, 100=0.29% 00:37:50.177 cpu : usr=98.56%, sys=1.09%, ctx=17, majf=0, minf=46 00:37:50.177 IO depths : 1=0.1%, 2=0.6%, 4=3.5%, 8=80.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:37:50.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.177 complete : 0=0.0%, 4=89.1%, 8=8.9%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.177 issued rwts: total=5566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.177 filename0: (groupid=0, jobs=1): err= 0: pid=1367786: Mon Oct 14 17:54:47 2024 00:37:50.177 read: IOPS=531, BW=2124KiB/s (2175kB/s)(20.8MiB/10003msec) 00:37:50.177 slat (nsec): min=6777, max=86555, avg=50334.17, stdev=18711.47 00:37:50.177 clat (usec): min=5110, max=55395, avg=29656.04, stdev=2077.61 00:37:50.177 lat (usec): min=5118, max=55439, avg=29706.37, stdev=2079.75 00:37:50.177 clat percentiles (usec): 00:37:50.177 | 1.00th=[28443], 5.00th=[28967], 10.00th=[29230], 20.00th=[29230], 00:37:50.177 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:37:50.177 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:50.177 | 99.00th=[31065], 99.50th=[34341], 99.90th=[55313], 99.95th=[55313], 00:37:50.177 | 99.99th=[55313] 00:37:50.177 bw ( KiB/s): min= 1920, max= 2176, per=4.13%, avg=2115.37, stdev=78.31, samples=19 00:37:50.177 iops : min= 480, max= 544, avg=528.84, stdev=19.58, samples=19 00:37:50.177 lat (msec) : 10=0.30%, 20=0.30%, 50=99.10%, 100=0.30% 00:37:50.177 cpu : usr=98.76%, sys=0.86%, ctx=11, majf=0, minf=44 00:37:50.177 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:50.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.177 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.177 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.177 filename0: (groupid=0, jobs=1): err= 0: pid=1367787: Mon Oct 14 17:54:47 2024 00:37:50.177 read: IOPS=537, BW=2151KiB/s (2202kB/s)(21.0MiB/10010msec) 00:37:50.177 slat 
(nsec): min=7276, max=83409, avg=21072.12, stdev=15591.79 00:37:50.177 clat (usec): min=3988, max=36088, avg=29580.49, stdev=2990.87 00:37:50.177 lat (usec): min=3998, max=36097, avg=29601.56, stdev=2991.40 00:37:50.177 clat percentiles (usec): 00:37:50.177 | 1.00th=[ 8979], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:37:50.177 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:37:50.177 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:37:50.177 | 99.00th=[33817], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:37:50.177 | 99.99th=[35914] 00:37:50.177 bw ( KiB/s): min= 2048, max= 2608, per=4.19%, avg=2146.40, stdev=125.69, samples=20 00:37:50.177 iops : min= 512, max= 652, avg=536.60, stdev=31.42, samples=20 00:37:50.177 lat (msec) : 4=0.04%, 10=1.15%, 20=1.00%, 50=97.81% 00:37:50.177 cpu : usr=98.53%, sys=1.09%, ctx=14, majf=0, minf=54 00:37:50.177 IO depths : 1=5.8%, 2=11.9%, 4=24.3%, 8=51.2%, 16=6.8%, 32=0.0%, >=64=0.0% 00:37:50.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.177 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.177 issued rwts: total=5382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.177 filename0: (groupid=0, jobs=1): err= 0: pid=1367788: Mon Oct 14 17:54:47 2024 00:37:50.177 read: IOPS=532, BW=2130KiB/s (2181kB/s)(20.8MiB/10005msec) 00:37:50.177 slat (nsec): min=5598, max=69503, avg=24987.93, stdev=10669.98 00:37:50.177 clat (usec): min=4796, max=56558, avg=29820.31, stdev=3016.24 00:37:50.177 lat (usec): min=4804, max=56572, avg=29845.29, stdev=3016.36 00:37:50.177 clat percentiles (usec): 00:37:50.177 | 1.00th=[17171], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:50.177 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:50.177 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:37:50.177 | 99.00th=[39060], 99.50th=[47449], 99.90th=[56361], 99.95th=[56361], 00:37:50.177 | 99.99th=[56361] 00:37:50.177 bw ( KiB/s): min= 1923, max= 2192, per=4.13%, avg=2115.53, stdev=76.98, samples=19 00:37:50.177 iops : min= 480, max= 548, avg=528.84, stdev=19.35, samples=19 00:37:50.177 lat (msec) : 10=0.30%, 20=1.11%, 50=98.14%, 100=0.45% 00:37:50.177 cpu : usr=98.50%, sys=1.15%, ctx=15, majf=0, minf=41 00:37:50.177 IO depths : 1=5.4%, 2=11.0%, 4=22.7%, 8=53.5%, 16=7.5%, 32=0.0%, >=64=0.0% 00:37:50.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.178 complete : 0=0.0%, 4=93.6%, 8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.178 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.178 filename0: (groupid=0, jobs=1): err= 0: pid=1367789: Mon Oct 14 17:54:47 2024 00:37:50.178 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10009msec) 00:37:50.178 slat (nsec): min=9894, max=68688, avg=31635.61, stdev=9746.29 00:37:50.178 clat (usec): min=14921, max=39981, avg=29859.26, stdev=1094.65 00:37:50.178 lat (usec): min=14937, max=39997, avg=29890.89, stdev=1094.54 00:37:50.178 clat percentiles (usec): 00:37:50.178 | 1.00th=[28967], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:37:50.178 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:50.178 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:50.178 | 99.00th=[31327], 99.50th=[33817], 99.90th=[40109], 
99.95th=[40109], 00:37:50.178 | 99.99th=[40109] 00:37:50.178 bw ( KiB/s): min= 2048, max= 2176, per=4.13%, avg=2115.37, stdev=65.66, samples=19 00:37:50.178 iops : min= 512, max= 544, avg=528.84, stdev=16.42, samples=19 00:37:50.178 lat (msec) : 20=0.30%, 50=99.70% 00:37:50.178 cpu : usr=98.47%, sys=1.17%, ctx=13, majf=0, minf=37 00:37:50.178 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:50.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.178 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.178 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.178 filename0: (groupid=0, jobs=1): err= 0: pid=1367790: Mon Oct 14 17:54:47 2024 00:37:50.178 read: IOPS=537, BW=2150KiB/s (2201kB/s)(21.0MiB/10004msec) 00:37:50.178 slat (nsec): min=7358, max=62377, avg=19030.89, stdev=10705.30 00:37:50.178 clat (usec): min=4325, max=35196, avg=29598.94, stdev=2918.95 00:37:50.178 lat (usec): min=4344, max=35218, avg=29617.97, stdev=2918.91 00:37:50.178 clat percentiles (usec): 00:37:50.178 | 1.00th=[ 7439], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:37:50.178 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:37:50.178 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:37:50.178 | 99.00th=[31065], 99.50th=[32375], 99.90th=[35390], 99.95th=[35390], 00:37:50.178 | 99.99th=[35390] 00:37:50.178 bw ( KiB/s): min= 2048, max= 2560, per=4.20%, avg=2149.05, stdev=117.46, samples=19 00:37:50.178 iops : min= 512, max= 640, avg=537.26, stdev=29.37, samples=19 00:37:50.178 lat (msec) : 10=1.19%, 20=0.60%, 50=98.21% 00:37:50.178 cpu : usr=98.34%, sys=1.22%, ctx=31, majf=0, minf=43 00:37:50.178 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:50.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.178 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.178 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.178 filename1: (groupid=0, jobs=1): err= 0: pid=1367791: Mon Oct 14 17:54:47 2024 00:37:50.178 read: IOPS=529, BW=2120KiB/s (2171kB/s)(20.7MiB/10005msec) 00:37:50.178 slat (nsec): min=4017, max=68794, avg=28336.08, stdev=11210.19 00:37:50.178 clat (usec): min=15076, max=61368, avg=29925.61, stdev=2246.64 00:37:50.178 lat (usec): min=15099, max=61381, avg=29953.95, stdev=2245.97 00:37:50.178 clat percentiles (usec): 00:37:50.178 | 1.00th=[25822], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:50.178 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:50.178 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:50.178 | 99.00th=[34866], 99.50th=[36439], 99.90th=[61080], 99.95th=[61604], 00:37:50.178 | 99.99th=[61604] 00:37:50.178 bw ( KiB/s): min= 1920, max= 2176, per=4.12%, avg=2111.16, stdev=77.01, samples=19 00:37:50.178 iops : min= 480, max= 544, avg=527.79, stdev=19.25, samples=19 00:37:50.178 lat (msec) : 20=0.72%, 50=98.98%, 100=0.30% 00:37:50.178 cpu : usr=98.43%, sys=1.21%, ctx=15, majf=0, minf=42 00:37:50.178 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:50.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.178 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:37:50.178 issued rwts: total=5302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.178 filename1: (groupid=0, jobs=1): err= 0: pid=1367792: Mon Oct 14 17:54:47 2024 00:37:50.178 read: IOPS=532, BW=2128KiB/s (2179kB/s)(20.8MiB/10015msec) 00:37:50.178 slat (nsec): min=8028, max=65615, avg=25559.03, stdev=10811.23 00:37:50.178 clat (usec): min=15346, max=34084, avg=29870.89, stdev=1093.53 00:37:50.178 lat (usec): min=15375, max=34101, avg=29896.44, stdev=1093.07 00:37:50.178 clat percentiles (usec): 00:37:50.178 | 1.00th=[25560], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:37:50.178 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:37:50.178 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:37:50.178 | 99.00th=[31065], 99.50th=[32637], 99.90th=[33817], 99.95th=[33817], 00:37:50.178 | 99.99th=[34341] 00:37:50.178 bw ( KiB/s): min= 2048, max= 2180, per=4.15%, avg=2125.00, stdev=64.51, samples=20 00:37:50.178 iops : min= 512, max= 545, avg=531.25, stdev=16.13, samples=20 00:37:50.178 lat (msec) : 20=0.34%, 50=99.66% 00:37:50.178 cpu : usr=98.47%, sys=1.16%, ctx=19, majf=0, minf=33 00:37:50.178 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:50.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.178 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.178 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.178 filename1: (groupid=0, jobs=1): err= 0: pid=1367793: Mon Oct 14 17:54:47 2024 00:37:50.178 read: IOPS=539, BW=2158KiB/s (2210kB/s)(21.1MiB/10005msec) 00:37:50.178 slat (nsec): min=7364, max=79034, avg=16919.01, stdev=12602.60 00:37:50.178 clat (usec): min=4300, max=36293, avg=29516.33, stdev=3078.26 00:37:50.178 lat (usec): min=4312, max=36302, avg=29533.25, stdev=3077.33 00:37:50.178 clat percentiles (usec): 00:37:50.178 | 1.00th=[ 9503], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:37:50.178 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:37:50.178 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:37:50.178 | 99.00th=[32900], 99.50th=[33817], 99.90th=[34866], 99.95th=[35390], 00:37:50.178 | 99.99th=[36439] 00:37:50.178 bw ( KiB/s): min= 2048, max= 2560, per=4.20%, avg=2152.80, stdev=113.70, samples=20 00:37:50.178 iops : min= 512, max= 640, avg=538.20, stdev=28.42, samples=20 00:37:50.178 lat (msec) : 10=1.30%, 20=1.06%, 50=97.65% 00:37:50.178 cpu : usr=98.33%, sys=1.30%, ctx=13, majf=0, minf=44 00:37:50.178 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:50.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.178 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.178 issued rwts: total=5398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.178 filename1: (groupid=0, jobs=1): err= 0: pid=1367794: Mon Oct 14 17:54:47 2024 00:37:50.178 read: IOPS=529, BW=2118KiB/s (2169kB/s)(20.7MiB/10001msec) 00:37:50.178 slat (nsec): min=5942, max=66178, avg=28186.23, stdev=11508.04 00:37:50.178 clat (usec): min=24182, max=46408, avg=29975.13, stdev=1018.70 00:37:50.178 lat (usec): min=24192, max=46425, avg=30003.32, stdev=1016.32 00:37:50.178 clat percentiles (usec): 
00:37:50.178 | 1.00th=[28967], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:37:50.178 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:50.178 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:37:50.178 | 99.00th=[34866], 99.50th=[35390], 99.90th=[42206], 99.95th=[46400], 00:37:50.178 | 99.99th=[46400] 00:37:50.178 bw ( KiB/s): min= 1920, max= 2176, per=4.13%, avg=2115.37, stdev=77.03, samples=19 00:37:50.178 iops : min= 480, max= 544, avg=528.84, stdev=19.26, samples=19 00:37:50.178 lat (msec) : 50=100.00% 00:37:50.178 cpu : usr=98.60%, sys=1.03%, ctx=15, majf=0, minf=46 00:37:50.178 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:50.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.178 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.178 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.178 filename1: (groupid=0, jobs=1): err= 0: pid=1367795: Mon Oct 14 17:54:47 2024 00:37:50.178 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10011msec) 00:37:50.178 slat (nsec): min=8173, max=83660, avg=29879.67, stdev=16595.69 00:37:50.178 clat (usec): min=15198, max=33981, avg=29799.79, stdev=1137.20 00:37:50.178 lat (usec): min=15215, max=34009, avg=29829.67, stdev=1137.14 00:37:50.178 clat percentiles (usec): 00:37:50.178 | 1.00th=[24511], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:50.178 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:50.178 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:37:50.178 | 99.00th=[31065], 99.50th=[32900], 99.90th=[33817], 99.95th=[33817], 00:37:50.178 | 99.99th=[33817] 00:37:50.178 bw ( KiB/s): min= 2048, max= 2180, per=4.15%, avg=2125.00, stdev=64.51, samples=20 00:37:50.178 iops : min= 512, max= 545, avg=531.25, stdev=16.13, samples=20 00:37:50.178 lat (msec) : 20=0.30%, 50=99.70% 00:37:50.178 cpu : usr=98.44%, sys=1.16%, ctx=14, majf=0, minf=63 00:37:50.178 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:50.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.178 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.178 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.178 filename1: (groupid=0, jobs=1): err= 0: pid=1367796: Mon Oct 14 17:54:47 2024 00:37:50.178 read: IOPS=532, BW=2128KiB/s (2179kB/s)(20.8MiB/10015msec) 00:37:50.178 slat (nsec): min=7772, max=64143, avg=26389.70, stdev=10616.99 00:37:50.178 clat (usec): min=15466, max=33955, avg=29861.15, stdev=1078.89 00:37:50.178 lat (usec): min=15499, max=33981, avg=29887.54, stdev=1078.85 00:37:50.178 clat percentiles (usec): 00:37:50.178 | 1.00th=[25822], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:37:50.178 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:37:50.178 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:37:50.178 | 99.00th=[31065], 99.50th=[32900], 99.90th=[33817], 99.95th=[33817], 00:37:50.178 | 99.99th=[33817] 00:37:50.178 bw ( KiB/s): min= 2048, max= 2180, per=4.15%, avg=2125.00, stdev=64.51, samples=20 00:37:50.178 iops : min= 512, max= 545, avg=531.25, stdev=16.13, samples=20 00:37:50.178 lat (msec) : 20=0.34%, 50=99.66% 00:37:50.178 cpu : usr=98.64%, 
sys=1.00%, ctx=17, majf=0, minf=47 00:37:50.178 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:50.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.178 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.179 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.179 filename1: (groupid=0, jobs=1): err= 0: pid=1367797: Mon Oct 14 17:54:47 2024 00:37:50.179 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10011msec) 00:37:50.179 slat (nsec): min=7471, max=83703, avg=29716.77, stdev=16020.21 00:37:50.179 clat (usec): min=15212, max=34063, avg=29799.06, stdev=1134.69 00:37:50.179 lat (usec): min=15229, max=34079, avg=29828.78, stdev=1134.79 00:37:50.179 clat percentiles (usec): 00:37:50.179 | 1.00th=[24773], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:50.179 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:50.179 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:37:50.179 | 99.00th=[31327], 99.50th=[32900], 99.90th=[33817], 99.95th=[33817], 00:37:50.179 | 99.99th=[33817] 00:37:50.179 bw ( KiB/s): min= 2048, max= 2180, per=4.15%, avg=2125.00, stdev=64.51, samples=20 00:37:50.179 iops : min= 512, max= 545, avg=531.25, stdev=16.13, samples=20 00:37:50.179 lat (msec) : 20=0.30%, 50=99.70% 00:37:50.179 cpu : usr=98.55%, sys=1.08%, ctx=8, majf=0, minf=44 00:37:50.179 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:50.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.179 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.179 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.179 filename1: (groupid=0, jobs=1): err= 0: pid=1367798: Mon Oct 14 17:54:47 2024 00:37:50.179 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10009msec) 00:37:50.179 slat (nsec): min=6173, max=67703, avg=31250.32, stdev=9278.87 00:37:50.179 clat (usec): min=15104, max=39075, avg=29860.45, stdev=1060.72 00:37:50.179 lat (usec): min=15127, max=39092, avg=29891.70, stdev=1060.54 00:37:50.179 clat percentiles (usec): 00:37:50.179 | 1.00th=[28967], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:37:50.179 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:50.179 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:50.179 | 99.00th=[31327], 99.50th=[33817], 99.90th=[39060], 99.95th=[39060], 00:37:50.179 | 99.99th=[39060] 00:37:50.179 bw ( KiB/s): min= 2048, max= 2176, per=4.13%, avg=2115.37, stdev=65.66, samples=19 00:37:50.179 iops : min= 512, max= 544, avg=528.84, stdev=16.42, samples=19 00:37:50.179 lat (msec) : 20=0.30%, 50=99.70% 00:37:50.179 cpu : usr=98.40%, sys=1.24%, ctx=12, majf=0, minf=31 00:37:50.179 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:50.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.179 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.179 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.179 filename2: (groupid=0, jobs=1): err= 0: pid=1367799: Mon Oct 14 17:54:47 2024 00:37:50.179 read: IOPS=532, BW=2128KiB/s (2179kB/s)(20.8MiB/10015msec) 
00:37:50.179 slat (nsec): min=8037, max=65779, avg=24022.15, stdev=11546.68 00:37:50.179 clat (usec): min=15287, max=34077, avg=29879.15, stdev=1095.05 00:37:50.179 lat (usec): min=15296, max=34100, avg=29903.17, stdev=1094.62 00:37:50.179 clat percentiles (usec): 00:37:50.179 | 1.00th=[26084], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:37:50.179 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:37:50.179 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:37:50.179 | 99.00th=[31065], 99.50th=[32900], 99.90th=[33817], 99.95th=[33817], 00:37:50.179 | 99.99th=[33817] 00:37:50.179 bw ( KiB/s): min= 2048, max= 2180, per=4.15%, avg=2125.00, stdev=64.51, samples=20 00:37:50.179 iops : min= 512, max= 545, avg=531.25, stdev=16.13, samples=20 00:37:50.179 lat (msec) : 20=0.30%, 50=99.70% 00:37:50.179 cpu : usr=98.22%, sys=1.41%, ctx=13, majf=0, minf=38 00:37:50.179 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:50.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.179 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.179 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.179 filename2: (groupid=0, jobs=1): err= 0: pid=1367800: Mon Oct 14 17:54:47 2024 00:37:50.179 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10010msec) 00:37:50.179 slat (nsec): min=6462, max=67006, avg=31480.28, stdev=10090.43 00:37:50.179 clat (usec): min=14712, max=38862, avg=29852.20, stdev=1122.82 00:37:50.179 lat (usec): min=14721, max=38880, avg=29883.68, stdev=1123.04 00:37:50.179 clat percentiles (usec): 00:37:50.179 | 1.00th=[28967], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:37:50.179 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:50.179 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:50.179 | 99.00th=[32900], 99.50th=[33817], 99.90th=[39060], 99.95th=[39060], 00:37:50.179 | 99.99th=[39060] 00:37:50.179 bw ( KiB/s): min= 2048, max= 2176, per=4.13%, avg=2115.58, stdev=65.44, samples=19 00:37:50.179 iops : min= 512, max= 544, avg=528.89, stdev=16.36, samples=19 00:37:50.179 lat (msec) : 20=0.30%, 50=99.70% 00:37:50.179 cpu : usr=98.63%, sys=1.01%, ctx=15, majf=0, minf=29 00:37:50.179 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:50.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.179 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.179 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.179 filename2: (groupid=0, jobs=1): err= 0: pid=1367801: Mon Oct 14 17:54:47 2024 00:37:50.179 read: IOPS=532, BW=2132KiB/s (2183kB/s)(20.8MiB/10001msec) 00:37:50.179 slat (nsec): min=4887, max=64322, avg=29373.89, stdev=11103.35 00:37:50.179 clat (usec): min=13645, max=61676, avg=29748.16, stdev=2663.80 00:37:50.179 lat (usec): min=13654, max=61690, avg=29777.53, stdev=2664.80 00:37:50.179 clat percentiles (usec): 00:37:50.179 | 1.00th=[17957], 5.00th=[28967], 10.00th=[29492], 20.00th=[29492], 00:37:50.179 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:37:50.179 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:50.179 | 99.00th=[36439], 99.50th=[43254], 99.90th=[57934], 99.95th=[57934], 
00:37:50.179 | 99.99th=[61604] 00:37:50.179 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2122.95, stdev=62.44, samples=19 00:37:50.179 iops : min= 512, max= 544, avg=530.74, stdev=15.61, samples=19 00:37:50.179 lat (msec) : 20=1.50%, 50=98.20%, 100=0.30% 00:37:50.179 cpu : usr=98.43%, sys=1.22%, ctx=11, majf=0, minf=25 00:37:50.179 IO depths : 1=5.2%, 2=11.0%, 4=23.6%, 8=52.9%, 16=7.4%, 32=0.0%, >=64=0.0% 00:37:50.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.179 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.179 issued rwts: total=5330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.179 filename2: (groupid=0, jobs=1): err= 0: pid=1367802: Mon Oct 14 17:54:47 2024 00:37:50.179 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10009msec) 00:37:50.179 slat (nsec): min=9442, max=69296, avg=31263.02, stdev=9464.98 00:37:50.179 clat (usec): min=15136, max=39965, avg=29871.06, stdev=1082.39 00:37:50.179 lat (usec): min=15151, max=39982, avg=29902.32, stdev=1082.04 00:37:50.179 clat percentiles (usec): 00:37:50.179 | 1.00th=[28967], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:37:50.179 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:50.179 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:50.179 | 99.00th=[31589], 99.50th=[33817], 99.90th=[40109], 99.95th=[40109], 00:37:50.179 | 99.99th=[40109] 00:37:50.179 bw ( KiB/s): min= 2048, max= 2176, per=4.13%, avg=2115.37, stdev=65.66, samples=19 00:37:50.179 iops : min= 512, max= 544, avg=528.84, stdev=16.42, samples=19 00:37:50.179 lat (msec) : 20=0.30%, 50=99.70% 00:37:50.179 cpu : usr=98.62%, sys=1.01%, ctx=12, majf=0, minf=35 00:37:50.179 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:50.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.179 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.179 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.179 filename2: (groupid=0, jobs=1): err= 0: pid=1367803: Mon Oct 14 17:54:47 2024 00:37:50.179 read: IOPS=531, BW=2128KiB/s (2179kB/s)(20.8MiB/10001msec) 00:37:50.179 slat (nsec): min=5444, max=77696, avg=19526.20, stdev=10931.26 00:37:50.179 clat (usec): min=10414, max=58135, avg=29922.90, stdev=3780.38 00:37:50.179 lat (usec): min=10430, max=58151, avg=29942.43, stdev=3779.36 00:37:50.179 clat percentiles (usec): 00:37:50.179 | 1.00th=[15270], 5.00th=[26084], 10.00th=[29230], 20.00th=[29754], 00:37:50.179 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:37:50.179 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30802], 95.00th=[33162], 00:37:50.179 | 99.00th=[45876], 99.50th=[50594], 99.90th=[57934], 99.95th=[57934], 00:37:50.179 | 99.99th=[57934] 00:37:50.179 bw ( KiB/s): min= 1920, max= 2216, per=4.14%, avg=2118.74, stdev=78.58, samples=19 00:37:50.179 iops : min= 480, max= 554, avg=529.68, stdev=19.64, samples=19 00:37:50.179 lat (msec) : 20=1.92%, 50=97.48%, 100=0.60% 00:37:50.179 cpu : usr=98.38%, sys=1.27%, ctx=10, majf=0, minf=42 00:37:50.179 IO depths : 1=3.9%, 2=8.4%, 4=18.5%, 8=59.6%, 16=9.5%, 32=0.0%, >=64=0.0% 00:37:50.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.179 complete : 0=0.0%, 4=92.5%, 8=2.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:37:50.179 issued rwts: total=5320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.179 filename2: (groupid=0, jobs=1): err= 0: pid=1367804: Mon Oct 14 17:54:47 2024 00:37:50.179 read: IOPS=532, BW=2132KiB/s (2183kB/s)(20.8MiB/10005msec) 00:37:50.179 slat (nsec): min=4861, max=69151, avg=25959.87, stdev=11075.48 00:37:50.179 clat (usec): min=4663, max=60119, avg=29779.37, stdev=2557.59 00:37:50.179 lat (usec): min=4677, max=60133, avg=29805.33, stdev=2557.66 00:37:50.179 clat percentiles (usec): 00:37:50.179 | 1.00th=[20317], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:37:50.179 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:37:50.179 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30802], 00:37:50.179 | 99.00th=[34866], 99.50th=[39584], 99.90th=[56361], 99.95th=[56361], 00:37:50.179 | 99.99th=[60031] 00:37:50.179 bw ( KiB/s): min= 1923, max= 2240, per=4.14%, avg=2117.21, stdev=81.02, samples=19 00:37:50.179 iops : min= 480, max= 560, avg=529.26, stdev=20.35, samples=19 00:37:50.179 lat (msec) : 10=0.30%, 20=0.60%, 50=98.80%, 100=0.30% 00:37:50.179 cpu : usr=98.50%, sys=1.13%, ctx=36, majf=0, minf=33 00:37:50.179 IO depths : 1=5.7%, 2=11.6%, 4=23.7%, 8=52.0%, 16=6.9%, 32=0.0%, >=64=0.0% 00:37:50.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.179 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.179 issued rwts: total=5332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.180 filename2: (groupid=0, jobs=1): err= 0: pid=1367805: Mon Oct 14 17:54:47 2024 00:37:50.180 read: IOPS=531, BW=2128KiB/s (2179kB/s)(20.8MiB/10017msec) 00:37:50.180 slat (nsec): min=6654, max=87905, avg=15269.95, stdev=7515.67 00:37:50.180 clat (usec): min=15530, max=34889, avg=29938.64, stdev=979.81 00:37:50.180 lat (usec): min=15618, max=34905, avg=29953.91, stdev=978.81 00:37:50.180 clat percentiles (usec): 00:37:50.180 | 1.00th=[26608], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:37:50.180 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:37:50.180 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:37:50.180 | 99.00th=[31589], 99.50th=[32113], 99.90th=[34866], 99.95th=[34866], 00:37:50.180 | 99.99th=[34866] 00:37:50.180 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2124.80, stdev=64.34, samples=20 00:37:50.180 iops : min= 512, max= 544, avg=531.20, stdev=16.08, samples=20 00:37:50.180 lat (msec) : 20=0.30%, 50=99.70% 00:37:50.180 cpu : usr=98.42%, sys=1.22%, ctx=15, majf=0, minf=57 00:37:50.180 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:50.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.180 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.180 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.180 filename2: (groupid=0, jobs=1): err= 0: pid=1367806: Mon Oct 14 17:54:47 2024 00:37:50.180 read: IOPS=531, BW=2128KiB/s (2179kB/s)(20.8MiB/10017msec) 00:37:50.180 slat (nsec): min=7241, max=95041, avg=43174.71, stdev=21893.63 00:37:50.180 clat (usec): min=17465, max=34867, avg=29697.24, stdev=984.45 00:37:50.180 lat (usec): min=17475, max=34933, avg=29740.42, stdev=984.42 00:37:50.180 clat percentiles (usec): 00:37:50.180 
| 1.00th=[26608], 5.00th=[28967], 10.00th=[29230], 20.00th=[29230], 00:37:50.180 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:37:50.180 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:37:50.180 | 99.00th=[31327], 99.50th=[31589], 99.90th=[34341], 99.95th=[34866], 00:37:50.180 | 99.99th=[34866] 00:37:50.180 bw ( KiB/s): min= 2048, max= 2180, per=4.15%, avg=2125.00, stdev=64.51, samples=20 00:37:50.180 iops : min= 512, max= 545, avg=531.25, stdev=16.13, samples=20 00:37:50.180 lat (msec) : 20=0.30%, 50=99.70% 00:37:50.180 cpu : usr=98.39%, sys=1.21%, ctx=20, majf=0, minf=30 00:37:50.180 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:50.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.180 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.180 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:50.180 00:37:50.180 Run status group 0 (all jobs): 00:37:50.180 READ: bw=50.0MiB/s (52.4MB/s), 2118KiB/s-2226KiB/s (2169kB/s-2279kB/s), io=501MiB (525MB), run=10001-10020msec 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.180 bdev_null0 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.180 17:54:47 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.180 [2024-10-14 17:54:47.870324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.180 bdev_null1 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 
00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:50.180 17:54:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:50.180 { 00:37:50.181 "params": { 00:37:50.181 "name": "Nvme$subsystem", 00:37:50.181 "trtype": "$TEST_TRANSPORT", 00:37:50.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:50.181 "adrfam": "ipv4", 00:37:50.181 "trsvcid": "$NVMF_PORT", 00:37:50.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:50.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:50.181 "hdgst": ${hdgst:-false}, 00:37:50.181 "ddgst": ${ddgst:-false} 00:37:50.181 }, 00:37:50.181 "method": "bdev_nvme_attach_controller" 00:37:50.181 } 00:37:50.181 EOF 00:37:50.181 )") 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:50.181 { 00:37:50.181 "params": { 00:37:50.181 "name": "Nvme$subsystem", 00:37:50.181 "trtype": "$TEST_TRANSPORT", 00:37:50.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:50.181 "adrfam": "ipv4", 00:37:50.181 "trsvcid": "$NVMF_PORT", 00:37:50.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:50.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:50.181 "hdgst": ${hdgst:-false}, 00:37:50.181 "ddgst": ${ddgst:-false} 00:37:50.181 }, 00:37:50.181 "method": "bdev_nvme_attach_controller" 00:37:50.181 } 00:37:50.181 EOF 00:37:50.181 )")
00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:50.181 "params": { 00:37:50.181 "name": "Nvme0", 00:37:50.181 "trtype": "tcp", 00:37:50.181 "traddr": "10.0.0.2", 00:37:50.181 "adrfam": "ipv4", 00:37:50.181 "trsvcid": "4420", 00:37:50.181 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:50.181 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:50.181 "hdgst": false, 00:37:50.181 "ddgst": false 00:37:50.181 }, 00:37:50.181 "method": "bdev_nvme_attach_controller" 00:37:50.181 },{ 00:37:50.181 "params": { 00:37:50.181 "name": "Nvme1", 00:37:50.181 "trtype": "tcp", 00:37:50.181 "traddr": "10.0.0.2", 00:37:50.181 "adrfam": "ipv4", 00:37:50.181 "trsvcid": "4420", 00:37:50.181 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:50.181 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:50.181 "hdgst": false, 00:37:50.181 "ddgst": false 00:37:50.181 }, 00:37:50.181 "method": "bdev_nvme_attach_controller" 00:37:50.181 }' 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:50.181 17:54:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:50.181 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:50.181 ... 00:37:50.181 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:50.181 ...
00:37:50.181 fio-3.35 00:37:50.181 Starting 4 threads 00:37:55.486 00:37:55.486 filename0: (groupid=0, jobs=1): err= 0: pid=1369752: Mon Oct 14 17:54:53 2024 00:37:55.486 read: IOPS=2919, BW=22.8MiB/s (23.9MB/s)(114MiB/5003msec) 00:37:55.486 slat (nsec): min=5966, max=28912, avg=8785.74, stdev=2987.60 00:37:55.486 clat (usec): min=640, max=5587, avg=2712.32, stdev=474.95 00:37:55.486 lat (usec): min=656, max=5601, avg=2721.11, stdev=474.62 00:37:55.486 clat percentiles (usec): 00:37:55.486 | 1.00th=[ 1549], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2409], 00:37:55.486 | 30.00th=[ 2474], 40.00th=[ 2573], 50.00th=[ 2671], 60.00th=[ 2769], 00:37:55.486 | 70.00th=[ 2900], 80.00th=[ 2966], 90.00th=[ 3195], 95.00th=[ 3589], 00:37:55.486 | 99.00th=[ 4293], 99.50th=[ 4490], 99.90th=[ 5014], 99.95th=[ 5145], 00:37:55.486 | 99.99th=[ 5473] 00:37:55.486 bw ( KiB/s): min=22304, max=24720, per=27.14%, avg=23361.60, stdev=906.37, samples=10 00:37:55.486 iops : min= 2788, max= 3090, avg=2920.20, stdev=113.30, samples=10 00:37:55.486 lat (usec) : 750=0.01%, 1000=0.47% 00:37:55.486 lat (msec) : 2=2.66%, 4=94.73%, 10=2.14% 00:37:55.486 cpu : usr=95.40%, sys=4.26%, ctx=12, majf=0, minf=9 00:37:55.486 IO depths : 1=0.2%, 2=8.8%, 4=63.0%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:55.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.486 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.486 issued rwts: total=14608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:55.486 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:55.486 filename0: (groupid=0, jobs=1): err= 0: pid=1369753: Mon Oct 14 17:54:53 2024 00:37:55.486 read: IOPS=2676, BW=20.9MiB/s (21.9MB/s)(105MiB/5001msec) 00:37:55.486 slat (nsec): min=5920, max=46960, avg=8664.59, stdev=3170.36 00:37:55.486 clat (usec): min=866, max=5484, avg=2964.57, stdev=418.23 00:37:55.486 lat (usec): min=873, max=5497, avg=2973.23, stdev=418.12 00:37:55.486 clat percentiles (usec): 00:37:55.486 | 1.00th=[ 1975], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2671], 00:37:55.486 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:37:55.486 | 70.00th=[ 3064], 80.00th=[ 3195], 90.00th=[ 3458], 95.00th=[ 3687], 00:37:55.486 | 99.00th=[ 4359], 99.50th=[ 4490], 99.90th=[ 5014], 99.95th=[ 5211], 00:37:55.486 | 99.99th=[ 5473] 00:37:55.486 bw ( KiB/s): min=20848, max=21872, per=24.82%, avg=21363.56, stdev=312.90, samples=9 00:37:55.486 iops : min= 2606, max= 2734, avg=2670.44, stdev=39.11, samples=9 00:37:55.486 lat (usec) : 1000=0.04% 00:37:55.486 lat (msec) : 2=1.02%, 4=96.45%, 10=2.49% 00:37:55.486 cpu : usr=95.62%, sys=4.06%, ctx=7, majf=0, minf=9 00:37:55.486 IO depths : 1=0.2%, 2=2.4%, 4=69.0%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:55.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.486 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.486 issued rwts: total=13384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:55.486 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:55.486 filename1: (groupid=0, jobs=1): err= 0: pid=1369754: Mon Oct 14 17:54:53 2024 00:37:55.486 read: IOPS=2585, BW=20.2MiB/s (21.2MB/s)(101MiB/5001msec) 00:37:55.486 slat (nsec): min=5947, max=38154, avg=8510.42, stdev=3033.07 00:37:55.486 clat (usec): min=974, max=5500, avg=3070.04, stdev=445.79 00:37:55.486 lat (usec): min=980, max=5511, avg=3078.55, stdev=445.64 00:37:55.486 clat percentiles (usec): 00:37:55.486 | 1.00th=[ 2040], 5.00th=[ 2442], 10.00th=[ 2671], 20.00th=[ 2802],
00:37:55.486 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3032], 00:37:55.487 | 70.00th=[ 3195], 80.00th=[ 3326], 90.00th=[ 3621], 95.00th=[ 3916], 00:37:55.487 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 5276], 99.95th=[ 5407], 00:37:55.487 | 99.99th=[ 5473] 00:37:55.487 bw ( KiB/s): min=19456, max=21296, per=24.03%, avg=20680.00, stdev=613.55, samples=9 00:37:55.487 iops : min= 2432, max= 2662, avg=2585.00, stdev=76.69, samples=9 00:37:55.487 lat (usec) : 1000=0.02% 00:37:55.487 lat (msec) : 2=0.75%, 4=94.96%, 10=4.28% 00:37:55.487 cpu : usr=95.72%, sys=3.96%, ctx=8, majf=0, minf=9 00:37:55.487 IO depths : 1=0.2%, 2=1.7%, 4=70.0%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:55.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.487 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.487 issued rwts: total=12929,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:55.487 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:55.487 filename1: (groupid=0, jobs=1): err= 0: pid=1369755: Mon Oct 14 17:54:53 2024 00:37:55.487 read: IOPS=2579, BW=20.2MiB/s (21.1MB/s)(101MiB/5003msec) 00:37:55.487 slat (nsec): min=5951, max=44099, avg=8482.85, stdev=3096.83 00:37:55.487 clat (usec): min=779, max=5305, avg=3076.03, stdev=422.41 00:37:55.487 lat (usec): min=791, max=5311, avg=3084.51, stdev=422.19 00:37:55.487 clat percentiles (usec): 00:37:55.487 | 1.00th=[ 2147], 5.00th=[ 2507], 10.00th=[ 2671], 20.00th=[ 2835], 00:37:55.487 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3064], 00:37:55.487 | 70.00th=[ 3195], 80.00th=[ 3294], 90.00th=[ 3621], 95.00th=[ 3851], 00:37:55.487 | 99.00th=[ 4555], 99.50th=[ 4817], 99.90th=[ 5145], 99.95th=[ 5211], 00:37:55.487 | 99.99th=[ 5276] 00:37:55.487 bw ( KiB/s): min=19824, max=21392, per=23.98%, avg=20638.40, stdev=622.58, samples=10 00:37:55.487 iops : min= 2478, max= 2674, avg=2579.80, stdev=77.82, samples=10 00:37:55.487 lat (usec) : 1000=0.02% 00:37:55.487 lat (msec) : 2=0.33%, 4=96.02%, 10=3.63% 00:37:55.487 cpu : usr=95.98%, sys=3.68%, ctx=8, majf=0, minf=9 00:37:55.487 IO depths : 1=0.2%, 2=1.7%, 4=71.6%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:55.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.487 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.487 issued rwts: total=12904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:55.487 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:55.487 00:37:55.487 Run status group 0 (all jobs): 00:37:55.487 READ: bw=84.1MiB/s (88.1MB/s), 20.2MiB/s-22.8MiB/s (21.1MB/s-23.9MB/s), io=421MiB (441MB), run=5001-5003msec 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:55.487 00:37:55.487 real 0m24.067s 00:37:55.487 user 4m51.545s 00:37:55.487 sys 0m5.320s 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:55.487 17:54:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:55.487 ************************************ 00:37:55.487 END TEST fio_dif_rand_params 00:37:55.487 ************************************ 00:37:55.487 17:54:54 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:55.487 17:54:54 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:55.487 17:54:54 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:55.487 17:54:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:55.487 ************************************ 00:37:55.487 START TEST fio_dif_digest 00:37:55.487 ************************************ 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true
00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:55.487 bdev_null0 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:55.487 [2024-10-14 17:54:54.158643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:55.487 { 00:37:55.487 "params": { 00:37:55.487 "name": "Nvme$subsystem", 00:37:55.487 "trtype": "$TEST_TRANSPORT",
00:37:55.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:55.487 "adrfam": "ipv4", 00:37:55.487 "trsvcid": "$NVMF_PORT", 00:37:55.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:55.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:55.487 "hdgst": ${hdgst:-false}, 00:37:55.487 "ddgst": ${ddgst:-false} 00:37:55.487 }, 00:37:55.487 "method": "bdev_nvme_attach_controller" 00:37:55.487 } 00:37:55.487 EOF 00:37:55.487 )") 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:55.487 "params": { 00:37:55.487 "name": "Nvme0", 00:37:55.487 "trtype": "tcp", 00:37:55.487 "traddr": "10.0.0.2", 00:37:55.487 "adrfam": "ipv4", 00:37:55.487 "trsvcid": "4420", 00:37:55.487 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:55.487 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:55.487 "hdgst": true, 00:37:55.487 "ddgst": true 00:37:55.487 }, 00:37:55.487 "method": "bdev_nvme_attach_controller" 00:37:55.487 }' 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:55.487 17:54:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:55.487 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:55.487 ... 
00:37:55.487 fio-3.35 00:37:55.487 Starting 3 threads 00:38:07.700 00:38:07.700 filename0: (groupid=0, jobs=1): err= 0: pid=1370814: Mon Oct 14 17:55:05 2024 00:38:07.700 read: IOPS=297, BW=37.1MiB/s (38.9MB/s)(373MiB/10045msec) 00:38:07.700 slat (nsec): min=6252, max=26375, avg=11604.79, stdev=1805.38 00:38:07.700 clat (usec): min=6764, max=50800, avg=10071.43, stdev=1232.92 00:38:07.700 lat (usec): min=6772, max=50813, avg=10083.03, stdev=1232.96 00:38:07.700 clat percentiles (usec): 00:38:07.700 | 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:38:07.700 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:38:07.700 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 00:38:07.700 | 99.00th=[11863], 99.50th=[12125], 99.90th=[13042], 99.95th=[45351], 00:38:07.700 | 99.99th=[50594] 00:38:07.700 bw ( KiB/s): min=36864, max=39424, per=35.18%, avg=38169.60, stdev=742.42, samples=20 00:38:07.700 iops : min= 288, max= 308, avg=298.20, stdev= 5.80, samples=20 00:38:07.700 lat (msec) : 10=47.52%, 20=52.41%, 50=0.03%, 100=0.03% 00:38:07.700 cpu : usr=94.32%, sys=5.40%, ctx=20, majf=0, minf=114 00:38:07.700 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:07.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:07.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:07.700 issued rwts: total=2984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:07.700 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:07.700 filename0: (groupid=0, jobs=1): err= 0: pid=1370816: Mon Oct 14 17:55:05 2024 00:38:07.700 read: IOPS=282, BW=35.3MiB/s (37.0MB/s)(354MiB/10044msec) 00:38:07.700 slat (usec): min=6, max=1288, avg=11.92, stdev=24.03 00:38:07.700 clat (usec): min=6747, max=48491, avg=10600.41, stdev=1250.58 00:38:07.700 lat (usec): min=6760, max=48503, avg=10612.33, stdev=1250.82 00:38:07.700 clat percentiles (usec): 00:38:07.700 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[ 9896], 00:38:07.700 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:38:07.700 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:38:07.700 | 99.00th=[12649], 99.50th=[12911], 99.90th=[14877], 99.95th=[44303], 00:38:07.700 | 99.99th=[48497] 00:38:07.700 bw ( KiB/s): min=35584, max=36864, per=33.42%, avg=36262.40, stdev=364.65, samples=20 00:38:07.700 iops : min= 278, max= 288, avg=283.30, stdev= 2.85, samples=20 00:38:07.700 lat (msec) : 10=21.66%, 20=78.27%, 50=0.07% 00:38:07.700 cpu : usr=94.71%, sys=4.99%, ctx=17, majf=0, minf=52 00:38:07.700 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:07.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:07.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:07.700 issued rwts: total=2835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:07.700 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:07.700 filename0: (groupid=0, jobs=1): err= 0: pid=1370817: Mon Oct 14 17:55:05 2024 00:38:07.700 read: IOPS=268, BW=33.5MiB/s (35.2MB/s)(337MiB/10045msec) 00:38:07.700 slat (nsec): min=6242, max=24793, avg=11307.85, stdev=1728.30 00:38:07.700 clat (usec): min=8422, max=50913, avg=11148.50, stdev=1812.66 00:38:07.700 lat (usec): min=8435, max=50935, avg=11159.81, stdev=1812.72 00:38:07.700 clat percentiles (usec): 00:38:07.700 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10421], 
00:38:07.700 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:38:07.700 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12387], 00:38:07.700 | 99.00th=[13042], 99.50th=[13435], 99.90th=[51119], 99.95th=[51119], 00:38:07.700 | 99.99th=[51119] 00:38:07.700 bw ( KiB/s): min=31488, max=35328, per=31.78%, avg=34483.20, stdev=779.59, samples=20 00:38:07.700 iops : min= 246, max= 276, avg=269.40, stdev= 6.09, samples=20 00:38:07.700 lat (msec) : 10=7.05%, 20=92.77%, 50=0.07%, 100=0.11% 00:38:07.700 cpu : usr=94.65%, sys=5.05%, ctx=20, majf=0, minf=92 00:38:07.700 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:07.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:07.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:07.700 issued rwts: total=2696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:07.700 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:07.700 00:38:07.700 Run status group 0 (all jobs): 00:38:07.700 READ: bw=106MiB/s (111MB/s), 33.5MiB/s-37.1MiB/s (35.2MB/s-38.9MB/s), io=1064MiB (1116MB), run=10044-10045msec 00:38:07.700 17:55:05 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:38:07.700 17:55:05 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:38:07.700 17:55:05 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:38:07.700 17:55:05 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:07.700 17:55:05 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:38:07.700 17:55:05 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:07.700 17:55:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.700 17:55:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:07.700 17:55:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.700 17:55:05 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:07.700 17:55:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.700 17:55:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:07.700 17:55:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.700 00:38:07.700 real 0m11.255s 00:38:07.700 user 0m35.179s 00:38:07.700 sys 0m1.835s 00:38:07.700 17:55:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:07.700 17:55:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:07.700 ************************************ 00:38:07.700 END TEST fio_dif_digest 00:38:07.700 ************************************ 00:38:07.700 17:55:05 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:38:07.700 17:55:05 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:38:07.700 17:55:05 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:07.700 17:55:05 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:38:07.700 17:55:05 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:07.700 17:55:05 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:38:07.700 17:55:05 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:07.700 17:55:05 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:07.700 rmmod nvme_tcp 00:38:07.700 rmmod nvme_fabrics 00:38:07.700 rmmod nvme_keyring
00:38:07.700 17:55:05 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:07.700 17:55:05 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:38:07.700 17:55:05 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:38:07.700 17:55:05 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 1362026 ']' 00:38:07.700 17:55:05 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 1362026 00:38:07.700 17:55:05 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1362026 ']' 00:38:07.700 17:55:05 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1362026 00:38:07.700 17:55:05 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:38:07.700 17:55:05 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:07.700 17:55:05 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1362026 00:38:07.700 17:55:05 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:07.700 17:55:05 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:07.700 17:55:05 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1362026' 00:38:07.700 killing process with pid 1362026 00:38:07.700 17:55:05 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1362026 00:38:07.700 17:55:05 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1362026 00:38:07.700 17:55:05 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:38:07.700 17:55:05 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:09.606 Waiting for block devices as requested 00:38:09.606 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:09.606 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:09.606 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:09.606 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:09.606 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:09.866 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:09.866 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:09.866 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:10.125 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:10.125 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:10.125 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:10.384 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:10.384 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:10.384 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:10.384 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:10.643 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:10.643 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:10.643 17:55:09 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:10.643 17:55:09 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:10.643 17:55:09 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:38:10.643 17:55:09 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:38:10.643 17:55:09 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:10.643 17:55:09 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:38:10.643 17:55:09 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:10.643 17:55:09 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:10.643 17:55:09 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:10.643 17:55:09 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:10.643 17:55:09 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:13.180 17:55:11 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:13.180
00:38:13.180 real 1m14.037s 00:38:13.180 user 7m8.901s 00:38:13.180 sys 0m20.970s 00:38:13.180 17:55:11 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:13.180 17:55:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:13.180 ************************************ 00:38:13.180 END TEST nvmf_dif 00:38:13.180 ************************************ 00:38:13.180 17:55:11 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:13.180 17:55:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:13.180 17:55:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:13.180 17:55:11 -- common/autotest_common.sh@10 -- # set +x 00:38:13.180 ************************************ 00:38:13.180 START TEST nvmf_abort_qd_sizes 00:38:13.180 ************************************ 00:38:13.180 17:55:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:13.180 * Looking for test storage... 00:38:13.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:13.180 17:55:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:13.180 17:55:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:38:13.180 17:55:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 ))
00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:13.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.180 --rc genhtml_branch_coverage=1 00:38:13.180 --rc genhtml_function_coverage=1 00:38:13.180 --rc genhtml_legend=1 00:38:13.180 --rc geninfo_all_blocks=1 00:38:13.180 --rc geninfo_unexecuted_blocks=1 00:38:13.180 00:38:13.180 ' 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:13.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.180 --rc genhtml_branch_coverage=1 00:38:13.180 --rc genhtml_function_coverage=1 00:38:13.180 --rc genhtml_legend=1 00:38:13.180 --rc geninfo_all_blocks=1 00:38:13.180 --rc geninfo_unexecuted_blocks=1 00:38:13.180 00:38:13.180 ' 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:13.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.180 --rc genhtml_branch_coverage=1 00:38:13.180 --rc genhtml_function_coverage=1 00:38:13.180 --rc genhtml_legend=1 00:38:13.180 --rc geninfo_all_blocks=1 00:38:13.180 --rc geninfo_unexecuted_blocks=1 00:38:13.180 00:38:13.180 ' 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:13.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.180 --rc genhtml_branch_coverage=1 00:38:13.180 --rc genhtml_function_coverage=1 00:38:13.180 --rc genhtml_legend=1 00:38:13.180 --rc geninfo_all_blocks=1 00:38:13.180 --rc geninfo_unexecuted_blocks=1 00:38:13.180 00:38:13.180 ' 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:13.180 17:55:12 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH
00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:13.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:38:13.181 17:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=()
00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:19.754 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:19.754 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:19.754 Found net devices under 0000:86:00.0: cvl_0_0 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:19.754 Found net devices under 0000:86:00.1: cvl_0_1 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:19.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:19.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.492 ms 00:38:19.754 00:38:19.754 --- 10.0.0.2 ping statistics --- 00:38:19.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:19.754 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:19.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:38:19.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:38:19.754 00:38:19.754 --- 10.0.0.1 ping statistics --- 00:38:19.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:19.754 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:38:19.754 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:19.755 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:38:19.755 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:38:19.755 17:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:21.661 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:21.661 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:21.661 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:21.661 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:21.661 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:21.661 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:21.661 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:21.661 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:21.661 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:21.661 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:21.661 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:21.661 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:21.920 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:21.920 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:21.920 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:21.920 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:23.299 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:38:23.299 17:55:22 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:23.299 17:55:22 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:23.299 17:55:22 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:23.299 17:55:22 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:23.299 17:55:22 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:23.299 17:55:22 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:23.299 17:55:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:23.299 17:55:22 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:23.299 17:55:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:23.299 17:55:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:23.299 17:55:22 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=1378824 00:38:23.299 17:55:22 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 1378824 00:38:23.299 17:55:22 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:23.299 17:55:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1378824 ']' 00:38:23.299 17:55:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:23.299 17:55:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:23.299 17:55:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:23.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:23.299 17:55:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:23.299 17:55:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:23.299 [2024-10-14 17:55:22.437967] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:38:23.299 [2024-10-14 17:55:22.438015] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:23.558 [2024-10-14 17:55:22.510400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:23.558 [2024-10-14 17:55:22.554197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:23.558 [2024-10-14 17:55:22.554234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:23.558 [2024-10-14 17:55:22.554241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:23.558 [2024-10-14 17:55:22.554247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:23.558 [2024-10-14 17:55:22.554252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:23.558 [2024-10-14 17:55:22.555706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:23.559 [2024-10-14 17:55:22.555816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:23.559 [2024-10-14 17:55:22.555925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.559 [2024-10-14 17:55:22.555926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:23.559 17:55:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:23.559 17:55:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:38:23.559 17:55:22 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:23.559 17:55:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:23.559 17:55:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:23.559 17:55:22 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:23.559 17:55:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:23.559 17:55:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:23.559 17:55:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:23.559 17:55:22 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:38:23.559 17:55:22 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:38:23.559 17:55:22 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:38:23.559 17:55:22 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:23.559 17:55:22 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:23.559 17:55:22 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:38:23.559 17:55:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:38:23.559 
17:55:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:23.559 17:55:22 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:23.559 17:55:22 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:38:23.818 17:55:22 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:38:23.818 17:55:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:23.818 17:55:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:38:23.818 17:55:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:23.818 17:55:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:23.818 17:55:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:23.818 17:55:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:23.818 ************************************ 00:38:23.818 START TEST spdk_target_abort 00:38:23.818 ************************************ 00:38:23.818 17:55:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:38:23.818 17:55:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:23.818 17:55:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:38:23.818 17:55:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:23.818 17:55:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:27.106 spdk_targetn1 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:27.106 [2024-10-14 17:55:25.563674] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:27.106 [2024-10-14 17:55:25.617158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:27.106 17:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:30.394 Initializing NVMe Controllers 00:38:30.394 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:30.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:30.394 Initialization complete. Launching workers. 00:38:30.394 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17052, failed: 0 00:38:30.394 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1415, failed to submit 15637 00:38:30.394 success 728, unsuccessful 687, failed 0 00:38:30.394 17:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:30.394 17:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:33.683 Initializing NVMe Controllers 00:38:33.683 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:33.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:33.683 Initialization complete. Launching workers. 00:38:33.683 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8518, failed: 0 00:38:33.683 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1238, failed to submit 7280 00:38:33.683 success 340, unsuccessful 898, failed 0 00:38:33.683 17:55:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:33.683 17:55:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:36.974 Initializing NVMe Controllers 00:38:36.974 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:36.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:36.974 Initialization complete. Launching workers. 
00:38:36.974 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38440, failed: 0 00:38:36.974 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2792, failed to submit 35648 00:38:36.974 success 609, unsuccessful 2183, failed 0 00:38:36.974 17:55:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:36.974 17:55:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.974 17:55:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:36.974 17:55:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.974 17:55:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:36.974 17:55:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.974 17:55:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1378824 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1378824 ']' 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1378824 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1378824 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1378824' 00:38:38.468 killing process with pid 1378824 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1378824 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1378824 00:38:38.468 00:38:38.468 real 0m14.674s 00:38:38.468 user 0m55.835s 00:38:38.468 sys 0m2.725s 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:38.468 ************************************ 00:38:38.468 END TEST spdk_target_abort 00:38:38.468 ************************************ 00:38:38.468 17:55:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:38.468 17:55:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:38.468 17:55:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:38.468 17:55:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:38.468 ************************************ 00:38:38.468 START TEST kernel_target_abort 00:38:38.468 
************************************ 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:38.468 17:55:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:41.036 Waiting for block devices as requested 00:38:41.294 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:41.294 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:41.294 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:41.554 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:41.554 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:41.554 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:41.813 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:41.813 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:41.813 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:42.072 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:42.072 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:42.072 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:42.072 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:42.331 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:42.331 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:42.331 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:42.590 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:42.590 No valid GPT data, bailing 00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:42.590 17:55:41 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1
00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1
00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1
00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1
00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp
00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420
00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4
00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:38:42.590 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:38:42.850
00:38:42.850 Discovery Log Number of Records 2, Generation counter 2
00:38:42.850 =====Discovery Log Entry 0======
00:38:42.850 trtype: tcp
00:38:42.850 adrfam: ipv4
00:38:42.850 subtype: current discovery subsystem
00:38:42.850 treq: not specified, sq flow control disable supported
00:38:42.850 portid: 1
00:38:42.850 trsvcid: 4420
00:38:42.850 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:38:42.850 traddr: 10.0.0.1
00:38:42.850 eflags: none
00:38:42.850 sectype: none
00:38:42.850 =====Discovery Log Entry 1======
00:38:42.850 trtype: tcp
00:38:42.850 adrfam: ipv4
00:38:42.850 subtype: nvme subsystem
00:38:42.850 treq: not specified, sq flow control disable supported
00:38:42.850 portid: 1
00:38:42.850 trsvcid: 4420
00:38:42.850 subnqn: nqn.2016-06.io.spdk:testnqn
00:38:42.850 traddr: 10.0.0.1
00:38:42.850 eflags: none
00:38:42.850 sectype: none
00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:38:42.850 17:55:41
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:42.850 17:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:46.140 Initializing NVMe Controllers 00:38:46.140 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:46.140 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:46.140 Initialization complete. Launching workers. 00:38:46.140 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96394, failed: 0 00:38:46.140 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 96394, failed to submit 0 00:38:46.140 success 0, unsuccessful 96394, failed 0 00:38:46.140 17:55:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:46.140 17:55:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:49.430 Initializing NVMe Controllers 00:38:49.430 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:49.430 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:49.430 Initialization complete. Launching workers. 
00:38:49.430 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 151513, failed: 0 00:38:49.430 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38110, failed to submit 113403 00:38:49.430 success 0, unsuccessful 38110, failed 0 00:38:49.430 17:55:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:49.430 17:55:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:51.965 Initializing NVMe Controllers 00:38:51.965 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:51.965 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:51.965 Initialization complete. Launching workers. 00:38:51.965 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142036, failed: 0 00:38:51.965 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35586, failed to submit 106450 00:38:51.965 success 0, unsuccessful 35586, failed 0 00:38:51.965 17:55:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:51.965 17:55:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:51.965 17:55:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:38:51.965 17:55:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:51.965 17:55:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:51.965 17:55:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:51.965 17:55:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:51.965 17:55:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:38:51.965 17:55:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:38:52.225 17:55:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:54.761 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:54.761 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:54.761 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:54.761 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:55.021 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:55.021 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:55.021 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:55.021 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:55.021 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:55.021 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:55.021 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:55.021 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:55.021 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:55.021 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:55.021 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:38:55.021 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:56.399 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:38:56.399 00:38:56.399 real 0m18.007s 00:38:56.399 user 0m9.043s 00:38:56.399 sys 0m5.139s 00:38:56.399 17:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:56.399 17:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:56.399 ************************************ 00:38:56.399 END TEST kernel_target_abort 00:38:56.399 ************************************ 00:38:56.399 17:55:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:56.399 17:55:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:56.399 17:55:55 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:56.399 17:55:55 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:56.399 17:55:55 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:56.399 17:55:55 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:56.399 17:55:55 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:56.399 17:55:55 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:56.659 rmmod nvme_tcp 00:38:56.659 rmmod nvme_fabrics 00:38:56.659 rmmod nvme_keyring 00:38:56.659 17:55:55 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:56.659 17:55:55 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:56.659 17:55:55 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:56.659 17:55:55 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 1378824 ']' 00:38:56.659 17:55:55 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 1378824 00:38:56.659 17:55:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1378824 ']' 00:38:56.659 17:55:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1378824 00:38:56.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1378824) - No such process 00:38:56.659 17:55:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1378824 is not found' 00:38:56.659 Process with pid 1378824 is not found 00:38:56.659 17:55:55 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:38:56.659 17:55:55 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:59.195 Waiting for block devices as requested 00:38:59.195 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:59.455 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:59.455 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:59.714 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:59.714 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:59.714 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:59.714 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:59.973 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:59.973 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:59.973 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:00.232 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:00.232 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:00.232 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:00.491 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:00.491 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:00.491 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:00.491 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:00.750 17:55:59 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:00.750 17:55:59 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:00.750 17:55:59 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:39:00.750 17:55:59 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:39:00.750 17:55:59 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:00.750 17:55:59 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:39:00.750 17:55:59 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:00.750 17:55:59 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:00.750 17:55:59 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:00.750 17:55:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:00.750 17:55:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:02.655 17:56:01 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:02.655 00:39:02.655 real 0m49.862s 00:39:02.655 user 1m9.217s 00:39:02.655 sys 0m16.543s 00:39:02.655 17:56:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:02.655 17:56:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:02.655 ************************************ 00:39:02.655 END TEST nvmf_abort_qd_sizes 00:39:02.655 ************************************ 00:39:02.655 17:56:01 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:02.655 17:56:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:02.655 17:56:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:02.655 17:56:01 -- common/autotest_common.sh@10 -- # set +x 00:39:02.914 ************************************ 00:39:02.914 START TEST keyring_file 00:39:02.914 ************************************ 00:39:02.914 17:56:01 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:02.914 * Looking for test storage... 
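The kernel_target_abort test above configures the in-kernel nvmet target purely through configfs: mkdir the subsystem, namespace and port objects, echo attributes into them, then symlink the subsystem under the port to start listening; clean_kernel_target removes the same objects in reverse and unloads nvmet_tcp/nvmet. The xtrace output elides each echo's destination file, so the pairing below with the standard nvmet attribute names is inferred from the nvmf/common.sh line numbers rather than copied from the script:

    modprobe nvmet   # the trace loads only nvmet; nvmet_tcp is pulled in for the tcp port
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # destination inferred
    echo 1            > "$subsys/attr_allow_any_host"              # destination inferred
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"   # the port starts listening here
    # Teardown mirrors this: rm -f the symlink, echo 0 into enable, rmdir
    # the namespace, port and subsystem, then modprobe -r nvmet_tcp nvmet.

The nvme discover run that follows the ln -s in the trace is the confirmation step: entry 0 is the discovery subsystem itself, entry 1 is nqn.2016-06.io.spdk:testnqn served on 10.0.0.1:4420.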
00:39:02.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:02.914 17:56:01 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:02.914 17:56:01 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:39:02.914 17:56:01 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:02.914 17:56:01 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@345 -- # : 1 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@353 -- # local d=1 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@355 -- # echo 1 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@353 -- # local d=2 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@355 -- # echo 2 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:02.914 17:56:01 keyring_file -- scripts/common.sh@368 -- # return 0 00:39:02.914 17:56:01 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:02.914 17:56:01 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:02.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:02.914 --rc genhtml_branch_coverage=1 00:39:02.914 --rc genhtml_function_coverage=1 00:39:02.914 --rc genhtml_legend=1 00:39:02.914 --rc geninfo_all_blocks=1 00:39:02.914 --rc geninfo_unexecuted_blocks=1 00:39:02.914 00:39:02.914 ' 00:39:02.914 17:56:01 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:02.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:02.914 --rc genhtml_branch_coverage=1 00:39:02.914 --rc genhtml_function_coverage=1 00:39:02.914 --rc genhtml_legend=1 00:39:02.914 --rc geninfo_all_blocks=1 
00:39:02.914 --rc geninfo_unexecuted_blocks=1 00:39:02.914 00:39:02.914 ' 00:39:02.914 17:56:01 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:02.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:02.914 --rc genhtml_branch_coverage=1 00:39:02.914 --rc genhtml_function_coverage=1 00:39:02.914 --rc genhtml_legend=1 00:39:02.914 --rc geninfo_all_blocks=1 00:39:02.914 --rc geninfo_unexecuted_blocks=1 00:39:02.914 00:39:02.914 ' 00:39:02.914 17:56:01 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:02.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:02.914 --rc genhtml_branch_coverage=1 00:39:02.914 --rc genhtml_function_coverage=1 00:39:02.914 --rc genhtml_legend=1 00:39:02.914 --rc geninfo_all_blocks=1 00:39:02.914 --rc geninfo_unexecuted_blocks=1 00:39:02.914 00:39:02.914 ' 00:39:02.914 17:56:01 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:02.914 17:56:01 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:02.914 17:56:01 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:02.914 17:56:02 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:02.915 17:56:02 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:39:02.915 17:56:02 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:02.915 17:56:02 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:02.915 17:56:02 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:02.915 17:56:02 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.915 17:56:02 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.915 17:56:02 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.915 17:56:02 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:02.915 17:56:02 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@51 -- # : 0 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:02.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:02.915 17:56:02 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:02.915 17:56:02 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:02.915 17:56:02 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:02.915 17:56:02 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:02.915 17:56:02 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:02.915 17:56:02 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:02.915 17:56:02 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:02.915 17:56:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:39:02.915 17:56:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:02.915 17:56:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:02.915 17:56:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:02.915 17:56:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:02.915 17:56:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SaaEihtiAz 00:39:02.915 17:56:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:39:02.915 17:56:02 keyring_file -- nvmf/common.sh@731 -- # python - 00:39:03.173 17:56:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SaaEihtiAz 00:39:03.173 17:56:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SaaEihtiAz 00:39:03.173 17:56:02 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.SaaEihtiAz 00:39:03.173 17:56:02 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:03.173 17:56:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:03.173 17:56:02 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:03.173 17:56:02 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:03.173 17:56:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:03.173 17:56:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:03.173 17:56:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Lwh4AlKOou 00:39:03.173 17:56:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:03.173 17:56:02 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:03.173 17:56:02 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:39:03.173 17:56:02 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:39:03.173 17:56:02 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:39:03.173 17:56:02 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:39:03.173 17:56:02 keyring_file -- nvmf/common.sh@731 -- # python - 00:39:03.173 17:56:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Lwh4AlKOou 00:39:03.173 17:56:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Lwh4AlKOou 00:39:03.173 17:56:02 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Lwh4AlKOou 00:39:03.173 17:56:02 keyring_file -- keyring/file.sh@30 -- # tgtpid=1387619 00:39:03.173 17:56:02 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:03.173 17:56:02 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1387619 00:39:03.173 17:56:02 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1387619 ']' 00:39:03.173 17:56:02 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:03.173 17:56:02 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:03.173 17:56:02 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:03.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:03.173 17:56:02 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:03.173 17:56:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:03.173 [2024-10-14 17:56:02.191266] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:39:03.174 [2024-10-14 17:56:02.191314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1387619 ] 00:39:03.174 [2024-10-14 17:56:02.259229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:03.174 [2024-10-14 17:56:02.301161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:03.433 17:56:02 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:03.433 17:56:02 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:39:03.433 17:56:02 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:03.433 17:56:02 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.433 17:56:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:03.433 [2024-10-14 17:56:02.509566] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:03.433 null0 00:39:03.433 [2024-10-14 17:56:02.541624] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:03.433 [2024-10-14 17:56:02.541951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:03.433 17:56:02 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.433 17:56:02 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:03.433 17:56:02 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:03.433 17:56:02 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:03.433 17:56:02 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:39:03.433 17:56:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:03.433 17:56:02 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:39:03.433 17:56:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:03.433 17:56:02 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:03.433 17:56:02 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.433 17:56:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:03.433 [2024-10-14 17:56:02.569681] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:03.692 request: 00:39:03.692 { 00:39:03.692 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:03.692 "secure_channel": false, 00:39:03.692 "listen_address": { 00:39:03.692 "trtype": "tcp", 00:39:03.692 "traddr": "127.0.0.1", 00:39:03.692 "trsvcid": "4420" 00:39:03.692 }, 00:39:03.692 "method": "nvmf_subsystem_add_listener", 00:39:03.692 "req_id": 1 00:39:03.692 } 00:39:03.692 Got JSON-RPC error response 00:39:03.692 response: 00:39:03.692 { 00:39:03.692 
"code": -32602, 00:39:03.692 "message": "Invalid parameters" 00:39:03.692 } 00:39:03.692 17:56:02 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:03.692 17:56:02 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:03.692 17:56:02 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:03.692 17:56:02 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:03.692 17:56:02 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:03.692 17:56:02 keyring_file -- keyring/file.sh@47 -- # bperfpid=1387624 00:39:03.692 17:56:02 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:03.692 17:56:02 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1387624 /var/tmp/bperf.sock 00:39:03.692 17:56:02 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1387624 ']' 00:39:03.692 17:56:02 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:03.692 17:56:02 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:03.692 17:56:02 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:03.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:03.692 17:56:02 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:03.692 17:56:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:03.692 [2024-10-14 17:56:02.620153] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:39:03.692 [2024-10-14 17:56:02.620192] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1387624 ] 00:39:03.692 [2024-10-14 17:56:02.686587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:03.692 [2024-10-14 17:56:02.726731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:03.692 17:56:02 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:03.692 17:56:02 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:39:03.692 17:56:02 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SaaEihtiAz 00:39:03.692 17:56:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SaaEihtiAz 00:39:03.951 17:56:03 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Lwh4AlKOou 00:39:03.951 17:56:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Lwh4AlKOou 00:39:04.210 17:56:03 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:39:04.210 17:56:03 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:04.210 17:56:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:04.210 17:56:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:04.210 17:56:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:39:04.470 17:56:03 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.SaaEihtiAz == \/\t\m\p\/\t\m\p\.\S\a\a\E\i\h\t\i\A\z ]] 00:39:04.470 17:56:03 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:39:04.470 17:56:03 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:39:04.470 17:56:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:04.470 17:56:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:04.470 17:56:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:04.728 17:56:03 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Lwh4AlKOou == \/\t\m\p\/\t\m\p\.\L\w\h\4\A\l\K\O\o\u ]] 00:39:04.728 17:56:03 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:39:04.728 17:56:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:04.728 17:56:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:04.728 17:56:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:04.728 17:56:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:04.728 17:56:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:04.728 17:56:03 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:04.728 17:56:03 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:39:04.728 17:56:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:04.728 17:56:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:04.728 17:56:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:04.728 17:56:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:04.728 17:56:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:04.987 17:56:04 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:39:04.987 17:56:04 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:04.987 17:56:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:05.246 [2024-10-14 17:56:04.224639] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:05.246 nvme0n1 00:39:05.246 17:56:04 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:39:05.246 17:56:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:05.246 17:56:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:05.246 17:56:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:05.246 17:56:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:05.246 17:56:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:05.503 17:56:04 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:39:05.503 17:56:04 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:39:05.503 17:56:04 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:39:05.503 17:56:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:05.503 17:56:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:05.503 17:56:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:05.503 17:56:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:05.762 17:56:04 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:39:05.762 17:56:04 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:05.762 Running I/O for 1 seconds... 00:39:06.700 19444.00 IOPS, 75.95 MiB/s 00:39:06.700 Latency(us) 00:39:06.700 [2024-10-14T15:56:05.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:06.700 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:06.700 nvme0n1 : 1.00 19492.92 76.14 0.00 0.00 6555.43 4056.99 11983.73 00:39:06.700 [2024-10-14T15:56:05.838Z] =================================================================================================================== 00:39:06.700 [2024-10-14T15:56:05.838Z] Total : 19492.92 76.14 0.00 0.00 6555.43 4056.99 11983.73 00:39:06.700 { 00:39:06.700 "results": [ 00:39:06.700 { 00:39:06.700 "job": "nvme0n1", 00:39:06.700 "core_mask": "0x2", 00:39:06.700 "workload": "randrw", 00:39:06.700 "percentage": 50, 00:39:06.700 "status": "finished", 00:39:06.700 "queue_depth": 128, 00:39:06.700 "io_size": 4096, 00:39:06.700 "runtime": 1.004057, 00:39:06.700 "iops": 19492.917234778502, 00:39:06.700 "mibps": 76.14420794835353, 00:39:06.700 "io_failed": 0, 00:39:06.700 "io_timeout": 0, 00:39:06.700 "avg_latency_us": 6555.428098060397, 00:39:06.700 "min_latency_us": 4056.9904761904763, 00:39:06.700 "max_latency_us": 11983.725714285714 00:39:06.700 } 00:39:06.700 ], 00:39:06.700 "core_count": 1 00:39:06.700 } 00:39:06.700 17:56:05 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:06.700 17:56:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:06.960 17:56:06 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:39:06.960 17:56:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:06.960 17:56:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:06.960 17:56:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:06.960 17:56:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:06.960 17:56:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:07.218 17:56:06 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:07.218 17:56:06 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:39:07.218 17:56:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:07.218 17:56:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:07.218 17:56:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:07.218 17:56:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:07.218 17:56:06 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:07.477 17:56:06 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:39:07.477 17:56:06 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:07.477 17:56:06 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:07.477 17:56:06 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:07.477 17:56:06 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:07.477 17:56:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:07.477 17:56:06 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:07.477 17:56:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:07.477 17:56:06 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:07.477 17:56:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:07.477 [2024-10-14 17:56:06.603539] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:07.477 [2024-10-14 17:56:06.603834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd1de0 (107): Transport endpoint is not connected 00:39:07.477 [2024-10-14 17:56:06.604827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd1de0 (9): Bad file descriptor 00:39:07.477 [2024-10-14 17:56:06.605827] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:07.477 [2024-10-14 17:56:06.605836] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:07.477 [2024-10-14 17:56:06.605843] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:07.477 [2024-10-14 17:56:06.605851] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
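The bdev_nvme_attach_controller call with --psk key1 above is a deliberate negative test: the listener was created against key0, so the connection dies at the transport layer, and the harness wraps the RPC in its NOT helper so that a non-zero exit is the passing outcome; the JSON-RPC request and error response it captured follow below. A simplified stand-in for that expected-failure pattern (expect_failure is a hypothetical name, not the harness's actual helper):

expect_failure() {
    # Invert the wrapped command's status: only a failure counts as a pass.
    if "$@"; then
        echo "FAIL: '$*' unexpectedly succeeded" >&2
        return 1
    fi
}

# key1 is not the PSK the listener was set up with, so this attach must fail.
expect_failure "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk key1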
00:39:07.477 request: 00:39:07.477 { 00:39:07.477 "name": "nvme0", 00:39:07.477 "trtype": "tcp", 00:39:07.477 "traddr": "127.0.0.1", 00:39:07.477 "adrfam": "ipv4", 00:39:07.477 "trsvcid": "4420", 00:39:07.477 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:07.477 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:07.477 "prchk_reftag": false, 00:39:07.477 "prchk_guard": false, 00:39:07.477 "hdgst": false, 00:39:07.477 "ddgst": false, 00:39:07.477 "psk": "key1", 00:39:07.477 "allow_unrecognized_csi": false, 00:39:07.477 "method": "bdev_nvme_attach_controller", 00:39:07.477 "req_id": 1 00:39:07.477 } 00:39:07.477 Got JSON-RPC error response 00:39:07.477 response: 00:39:07.477 { 00:39:07.477 "code": -5, 00:39:07.477 "message": "Input/output error" 00:39:07.477 } 00:39:07.737 17:56:06 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:07.737 17:56:06 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:07.737 17:56:06 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:07.737 17:56:06 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:07.737 17:56:06 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:39:07.737 17:56:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:07.737 17:56:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:07.737 17:56:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:07.737 17:56:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:07.737 17:56:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:07.737 17:56:06 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:07.737 17:56:06 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:39:07.737 17:56:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:07.737 17:56:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:07.737 17:56:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:07.737 17:56:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:07.737 17:56:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:07.997 17:56:07 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:39:07.997 17:56:07 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:39:07.997 17:56:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:08.255 17:56:07 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:39:08.255 17:56:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:08.255 17:56:07 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:39:08.255 17:56:07 keyring_file -- keyring/file.sh@78 -- # jq length 00:39:08.255 17:56:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:08.514 17:56:07 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:39:08.514 17:56:07 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.SaaEihtiAz 00:39:08.514 17:56:07 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.SaaEihtiAz 00:39:08.514 17:56:07 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:08.515 17:56:07 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.SaaEihtiAz 00:39:08.515 17:56:07 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:08.515 17:56:07 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:08.515 17:56:07 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:08.515 17:56:07 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:08.515 17:56:07 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SaaEihtiAz 00:39:08.515 17:56:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SaaEihtiAz 00:39:08.774 [2024-10-14 17:56:07.781942] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.SaaEihtiAz': 0100660 00:39:08.774 [2024-10-14 17:56:07.781967] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:08.774 request: 00:39:08.774 { 00:39:08.774 "name": "key0", 00:39:08.774 "path": "/tmp/tmp.SaaEihtiAz", 00:39:08.774 "method": "keyring_file_add_key", 00:39:08.774 "req_id": 1 00:39:08.774 } 00:39:08.774 Got JSON-RPC error response 00:39:08.774 response: 00:39:08.774 { 00:39:08.774 "code": -1, 00:39:08.774 "message": "Operation not permitted" 00:39:08.774 } 00:39:08.774 17:56:07 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:08.774 17:56:07 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:08.774 17:56:07 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:08.774 17:56:07 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:08.774 17:56:07 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.SaaEihtiAz 00:39:08.774 17:56:07 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SaaEihtiAz 00:39:08.774 17:56:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SaaEihtiAz 00:39:09.033 17:56:07 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.SaaEihtiAz 00:39:09.033 17:56:07 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:39:09.033 17:56:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:09.033 17:56:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:09.033 17:56:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:09.033 17:56:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:09.033 17:56:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:09.292 17:56:08 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:39:09.292 17:56:08 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:09.292 17:56:08 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:39:09.292 17:56:08 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:09.292 17:56:08 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:09.292 17:56:08 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:09.292 17:56:08 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:09.292 17:56:08 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:09.292 17:56:08 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:09.292 17:56:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:09.292 [2024-10-14 17:56:08.371498] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.SaaEihtiAz': No such file or directory 00:39:09.292 [2024-10-14 17:56:08.371518] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:09.292 [2024-10-14 17:56:08.371533] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:09.292 [2024-10-14 17:56:08.371540] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:39:09.292 [2024-10-14 17:56:08.371546] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:09.292 [2024-10-14 17:56:08.371552] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:09.292 request: 00:39:09.292 { 00:39:09.292 "name": "nvme0", 00:39:09.292 "trtype": "tcp", 00:39:09.292 "traddr": "127.0.0.1", 00:39:09.292 "adrfam": "ipv4", 00:39:09.292 "trsvcid": "4420", 00:39:09.292 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:09.292 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:09.292 "prchk_reftag": false, 00:39:09.292 "prchk_guard": false, 00:39:09.292 "hdgst": false, 00:39:09.292 "ddgst": false, 00:39:09.292 "psk": "key0", 00:39:09.292 "allow_unrecognized_csi": false, 00:39:09.292 "method": "bdev_nvme_attach_controller", 00:39:09.293 "req_id": 1 00:39:09.293 } 00:39:09.293 Got JSON-RPC error response 00:39:09.293 response: 00:39:09.293 { 00:39:09.293 "code": -19, 00:39:09.293 "message": "No such device" 00:39:09.293 } 00:39:09.293 17:56:08 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:39:09.293 17:56:08 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:09.293 17:56:08 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:09.293 17:56:08 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:09.293 17:56:08 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:39:09.293 17:56:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:09.552 17:56:08 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:09.552 17:56:08 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:39:09.552 17:56:08 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:09.552 17:56:08 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:09.552 17:56:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:09.552 17:56:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:09.552 17:56:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AbMXKSRhgZ 00:39:09.552 17:56:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:09.552 17:56:08 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:09.552 17:56:08 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:39:09.552 17:56:08 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:39:09.552 17:56:08 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:39:09.552 17:56:08 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:39:09.552 17:56:08 keyring_file -- nvmf/common.sh@731 -- # python - 00:39:09.552 17:56:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AbMXKSRhgZ 00:39:09.552 17:56:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AbMXKSRhgZ 00:39:09.552 17:56:08 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.AbMXKSRhgZ 00:39:09.552 17:56:08 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AbMXKSRhgZ 00:39:09.552 17:56:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AbMXKSRhgZ 00:39:09.810 17:56:08 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:09.810 17:56:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:10.069 nvme0n1 00:39:10.069 17:56:09 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:39:10.069 17:56:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:10.069 17:56:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:10.069 17:56:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:10.069 17:56:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:10.069 17:56:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.328 17:56:09 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:39:10.328 17:56:09 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:39:10.328 17:56:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:10.328 17:56:09 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:39:10.328 17:56:09 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:39:10.328 17:56:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:10.328 17:56:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:10.328 17:56:09 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.587 17:56:09 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:39:10.587 17:56:09 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:39:10.587 17:56:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:10.587 17:56:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:10.587 17:56:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:10.587 17:56:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:10.587 17:56:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.845 17:56:09 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:39:10.845 17:56:09 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:10.845 17:56:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:11.103 17:56:10 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:39:11.103 17:56:10 keyring_file -- keyring/file.sh@105 -- # jq length 00:39:11.103 17:56:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:11.103 17:56:10 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:39:11.104 17:56:10 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AbMXKSRhgZ 00:39:11.104 17:56:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AbMXKSRhgZ 00:39:11.362 17:56:10 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Lwh4AlKOou 00:39:11.363 17:56:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Lwh4AlKOou 00:39:11.621 17:56:10 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:11.621 17:56:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:11.880 nvme0n1 00:39:11.880 17:56:10 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:39:11.880 17:56:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:12.139 17:56:11 keyring_file -- keyring/file.sh@113 -- # config='{ 00:39:12.139 "subsystems": [ 00:39:12.139 { 00:39:12.139 "subsystem": "keyring", 00:39:12.139 "config": [ 00:39:12.139 { 00:39:12.139 "method": "keyring_file_add_key", 00:39:12.139 "params": { 00:39:12.139 "name": "key0", 00:39:12.139 "path": "/tmp/tmp.AbMXKSRhgZ" 00:39:12.139 } 00:39:12.139 }, 00:39:12.139 { 00:39:12.139 "method": "keyring_file_add_key", 00:39:12.139 "params": { 00:39:12.139 "name": "key1", 00:39:12.139 "path": "/tmp/tmp.Lwh4AlKOou" 00:39:12.139 } 00:39:12.139 } 00:39:12.139 ] 00:39:12.139 
}, 00:39:12.139 { 00:39:12.139 "subsystem": "iobuf", 00:39:12.139 "config": [ 00:39:12.139 { 00:39:12.139 "method": "iobuf_set_options", 00:39:12.139 "params": { 00:39:12.139 "small_pool_count": 8192, 00:39:12.139 "large_pool_count": 1024, 00:39:12.139 "small_bufsize": 8192, 00:39:12.139 "large_bufsize": 135168 00:39:12.139 } 00:39:12.139 } 00:39:12.139 ] 00:39:12.139 }, 00:39:12.139 { 00:39:12.139 "subsystem": "sock", 00:39:12.139 "config": [ 00:39:12.139 { 00:39:12.139 "method": "sock_set_default_impl", 00:39:12.139 "params": { 00:39:12.139 "impl_name": "posix" 00:39:12.139 } 00:39:12.139 }, 00:39:12.139 { 00:39:12.139 "method": "sock_impl_set_options", 00:39:12.139 "params": { 00:39:12.139 "impl_name": "ssl", 00:39:12.139 "recv_buf_size": 4096, 00:39:12.139 "send_buf_size": 4096, 00:39:12.139 "enable_recv_pipe": true, 00:39:12.139 "enable_quickack": false, 00:39:12.139 "enable_placement_id": 0, 00:39:12.139 "enable_zerocopy_send_server": true, 00:39:12.139 "enable_zerocopy_send_client": false, 00:39:12.139 "zerocopy_threshold": 0, 00:39:12.139 "tls_version": 0, 00:39:12.139 "enable_ktls": false 00:39:12.139 } 00:39:12.139 }, 00:39:12.139 { 00:39:12.139 "method": "sock_impl_set_options", 00:39:12.139 "params": { 00:39:12.139 "impl_name": "posix", 00:39:12.139 "recv_buf_size": 2097152, 00:39:12.139 "send_buf_size": 2097152, 00:39:12.139 "enable_recv_pipe": true, 00:39:12.139 "enable_quickack": false, 00:39:12.139 "enable_placement_id": 0, 00:39:12.139 "enable_zerocopy_send_server": true, 00:39:12.139 "enable_zerocopy_send_client": false, 00:39:12.139 "zerocopy_threshold": 0, 00:39:12.139 "tls_version": 0, 00:39:12.139 "enable_ktls": false 00:39:12.139 } 00:39:12.139 } 00:39:12.139 ] 00:39:12.139 }, 00:39:12.139 { 00:39:12.139 "subsystem": "vmd", 00:39:12.139 "config": [] 00:39:12.139 }, 00:39:12.139 { 00:39:12.139 "subsystem": "accel", 00:39:12.139 "config": [ 00:39:12.139 { 00:39:12.139 "method": "accel_set_options", 00:39:12.139 "params": { 00:39:12.139 "small_cache_size": 128, 00:39:12.139 "large_cache_size": 16, 00:39:12.139 "task_count": 2048, 00:39:12.139 "sequence_count": 2048, 00:39:12.139 "buf_count": 2048 00:39:12.139 } 00:39:12.139 } 00:39:12.139 ] 00:39:12.139 }, 00:39:12.139 { 00:39:12.139 "subsystem": "bdev", 00:39:12.139 "config": [ 00:39:12.139 { 00:39:12.139 "method": "bdev_set_options", 00:39:12.139 "params": { 00:39:12.139 "bdev_io_pool_size": 65535, 00:39:12.139 "bdev_io_cache_size": 256, 00:39:12.139 "bdev_auto_examine": true, 00:39:12.139 "iobuf_small_cache_size": 128, 00:39:12.139 "iobuf_large_cache_size": 16 00:39:12.139 } 00:39:12.139 }, 00:39:12.139 { 00:39:12.139 "method": "bdev_raid_set_options", 00:39:12.139 "params": { 00:39:12.139 "process_window_size_kb": 1024, 00:39:12.139 "process_max_bandwidth_mb_sec": 0 00:39:12.139 } 00:39:12.139 }, 00:39:12.139 { 00:39:12.139 "method": "bdev_iscsi_set_options", 00:39:12.139 "params": { 00:39:12.139 "timeout_sec": 30 00:39:12.139 } 00:39:12.139 }, 00:39:12.139 { 00:39:12.139 "method": "bdev_nvme_set_options", 00:39:12.139 "params": { 00:39:12.139 "action_on_timeout": "none", 00:39:12.139 "timeout_us": 0, 00:39:12.139 "timeout_admin_us": 0, 00:39:12.139 "keep_alive_timeout_ms": 10000, 00:39:12.139 "arbitration_burst": 0, 00:39:12.139 "low_priority_weight": 0, 00:39:12.139 "medium_priority_weight": 0, 00:39:12.139 "high_priority_weight": 0, 00:39:12.139 "nvme_adminq_poll_period_us": 10000, 00:39:12.139 "nvme_ioq_poll_period_us": 0, 00:39:12.140 "io_queue_requests": 512, 00:39:12.140 "delay_cmd_submit": true, 00:39:12.140 
"transport_retry_count": 4, 00:39:12.140 "bdev_retry_count": 3, 00:39:12.140 "transport_ack_timeout": 0, 00:39:12.140 "ctrlr_loss_timeout_sec": 0, 00:39:12.140 "reconnect_delay_sec": 0, 00:39:12.140 "fast_io_fail_timeout_sec": 0, 00:39:12.140 "disable_auto_failback": false, 00:39:12.140 "generate_uuids": false, 00:39:12.140 "transport_tos": 0, 00:39:12.140 "nvme_error_stat": false, 00:39:12.140 "rdma_srq_size": 0, 00:39:12.140 "io_path_stat": false, 00:39:12.140 "allow_accel_sequence": false, 00:39:12.140 "rdma_max_cq_size": 0, 00:39:12.140 "rdma_cm_event_timeout_ms": 0, 00:39:12.140 "dhchap_digests": [ 00:39:12.140 "sha256", 00:39:12.140 "sha384", 00:39:12.140 "sha512" 00:39:12.140 ], 00:39:12.140 "dhchap_dhgroups": [ 00:39:12.140 "null", 00:39:12.140 "ffdhe2048", 00:39:12.140 "ffdhe3072", 00:39:12.140 "ffdhe4096", 00:39:12.140 "ffdhe6144", 00:39:12.140 "ffdhe8192" 00:39:12.140 ] 00:39:12.140 } 00:39:12.140 }, 00:39:12.140 { 00:39:12.140 "method": "bdev_nvme_attach_controller", 00:39:12.140 "params": { 00:39:12.140 "name": "nvme0", 00:39:12.140 "trtype": "TCP", 00:39:12.140 "adrfam": "IPv4", 00:39:12.140 "traddr": "127.0.0.1", 00:39:12.140 "trsvcid": "4420", 00:39:12.140 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:12.140 "prchk_reftag": false, 00:39:12.140 "prchk_guard": false, 00:39:12.140 "ctrlr_loss_timeout_sec": 0, 00:39:12.140 "reconnect_delay_sec": 0, 00:39:12.140 "fast_io_fail_timeout_sec": 0, 00:39:12.140 "psk": "key0", 00:39:12.140 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:12.140 "hdgst": false, 00:39:12.140 "ddgst": false, 00:39:12.140 "multipath": "multipath" 00:39:12.140 } 00:39:12.140 }, 00:39:12.140 { 00:39:12.140 "method": "bdev_nvme_set_hotplug", 00:39:12.140 "params": { 00:39:12.140 "period_us": 100000, 00:39:12.140 "enable": false 00:39:12.140 } 00:39:12.140 }, 00:39:12.140 { 00:39:12.140 "method": "bdev_wait_for_examine" 00:39:12.140 } 00:39:12.140 ] 00:39:12.140 }, 00:39:12.140 { 00:39:12.140 "subsystem": "nbd", 00:39:12.140 "config": [] 00:39:12.140 } 00:39:12.140 ] 00:39:12.140 }' 00:39:12.140 17:56:11 keyring_file -- keyring/file.sh@115 -- # killprocess 1387624 00:39:12.140 17:56:11 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1387624 ']' 00:39:12.140 17:56:11 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1387624 00:39:12.140 17:56:11 keyring_file -- common/autotest_common.sh@955 -- # uname 00:39:12.140 17:56:11 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:12.140 17:56:11 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1387624 00:39:12.140 17:56:11 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:12.140 17:56:11 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:12.140 17:56:11 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1387624' 00:39:12.140 killing process with pid 1387624 00:39:12.140 17:56:11 keyring_file -- common/autotest_common.sh@969 -- # kill 1387624 00:39:12.140 Received shutdown signal, test time was about 1.000000 seconds 00:39:12.140 00:39:12.140 Latency(us) 00:39:12.140 [2024-10-14T15:56:11.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:12.140 [2024-10-14T15:56:11.278Z] =================================================================================================================== 00:39:12.140 [2024-10-14T15:56:11.278Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:12.140 17:56:11 keyring_file -- 
common/autotest_common.sh@974 -- # wait 1387624 00:39:12.399 17:56:11 keyring_file -- keyring/file.sh@118 -- # bperfpid=1389142 00:39:12.399 17:56:11 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1389142 /var/tmp/bperf.sock 00:39:12.399 17:56:11 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1389142 ']' 00:39:12.399 17:56:11 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:12.399 17:56:11 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:12.399 17:56:11 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:12.399 17:56:11 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:12.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:12.399 17:56:11 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:39:12.399 "subsystems": [ 00:39:12.399 { 00:39:12.399 "subsystem": "keyring", 00:39:12.399 "config": [ 00:39:12.399 { 00:39:12.399 "method": "keyring_file_add_key", 00:39:12.399 "params": { 00:39:12.399 "name": "key0", 00:39:12.399 "path": "/tmp/tmp.AbMXKSRhgZ" 00:39:12.399 } 00:39:12.399 }, 00:39:12.399 { 00:39:12.399 "method": "keyring_file_add_key", 00:39:12.399 "params": { 00:39:12.399 "name": "key1", 00:39:12.399 "path": "/tmp/tmp.Lwh4AlKOou" 00:39:12.399 } 00:39:12.399 } 00:39:12.399 ] 00:39:12.399 }, 00:39:12.399 { 00:39:12.399 "subsystem": "iobuf", 00:39:12.399 "config": [ 00:39:12.399 { 00:39:12.399 "method": "iobuf_set_options", 00:39:12.399 "params": { 00:39:12.399 "small_pool_count": 8192, 00:39:12.399 "large_pool_count": 1024, 00:39:12.399 "small_bufsize": 8192, 00:39:12.399 "large_bufsize": 135168 00:39:12.399 } 00:39:12.399 } 00:39:12.399 ] 00:39:12.399 }, 00:39:12.399 { 00:39:12.399 "subsystem": "sock", 00:39:12.399 "config": [ 00:39:12.399 { 00:39:12.399 "method": "sock_set_default_impl", 00:39:12.399 "params": { 00:39:12.399 "impl_name": "posix" 00:39:12.399 } 00:39:12.399 }, 00:39:12.399 { 00:39:12.399 "method": "sock_impl_set_options", 00:39:12.399 "params": { 00:39:12.399 "impl_name": "ssl", 00:39:12.399 "recv_buf_size": 4096, 00:39:12.399 "send_buf_size": 4096, 00:39:12.399 "enable_recv_pipe": true, 00:39:12.399 "enable_quickack": false, 00:39:12.399 "enable_placement_id": 0, 00:39:12.399 "enable_zerocopy_send_server": true, 00:39:12.399 "enable_zerocopy_send_client": false, 00:39:12.399 "zerocopy_threshold": 0, 00:39:12.399 "tls_version": 0, 00:39:12.399 "enable_ktls": false 00:39:12.399 } 00:39:12.399 }, 00:39:12.399 { 00:39:12.399 "method": "sock_impl_set_options", 00:39:12.399 "params": { 00:39:12.399 "impl_name": "posix", 00:39:12.399 "recv_buf_size": 2097152, 00:39:12.399 "send_buf_size": 2097152, 00:39:12.399 "enable_recv_pipe": true, 00:39:12.399 "enable_quickack": false, 00:39:12.399 "enable_placement_id": 0, 00:39:12.399 "enable_zerocopy_send_server": true, 00:39:12.399 "enable_zerocopy_send_client": false, 00:39:12.399 "zerocopy_threshold": 0, 00:39:12.399 "tls_version": 0, 00:39:12.399 "enable_ktls": false 00:39:12.399 } 00:39:12.399 } 00:39:12.399 ] 00:39:12.399 }, 00:39:12.399 { 00:39:12.399 "subsystem": "vmd", 00:39:12.399 "config": [] 00:39:12.399 }, 00:39:12.399 { 00:39:12.399 "subsystem": "accel", 00:39:12.399 "config": [ 00:39:12.399 { 00:39:12.399 "method": "accel_set_options", 
00:39:12.399 "params": { 00:39:12.399 "small_cache_size": 128, 00:39:12.399 "large_cache_size": 16, 00:39:12.399 "task_count": 2048, 00:39:12.399 "sequence_count": 2048, 00:39:12.399 "buf_count": 2048 00:39:12.399 } 00:39:12.399 } 00:39:12.399 ] 00:39:12.399 }, 00:39:12.399 { 00:39:12.399 "subsystem": "bdev", 00:39:12.399 "config": [ 00:39:12.399 { 00:39:12.399 "method": "bdev_set_options", 00:39:12.399 "params": { 00:39:12.399 "bdev_io_pool_size": 65535, 00:39:12.399 "bdev_io_cache_size": 256, 00:39:12.399 "bdev_auto_examine": true, 00:39:12.399 "iobuf_small_cache_size": 128, 00:39:12.399 "iobuf_large_cache_size": 16 00:39:12.399 } 00:39:12.399 }, 00:39:12.399 { 00:39:12.399 "method": "bdev_raid_set_options", 00:39:12.399 "params": { 00:39:12.399 "process_window_size_kb": 1024, 00:39:12.400 "process_max_bandwidth_mb_sec": 0 00:39:12.400 } 00:39:12.400 }, 00:39:12.400 { 00:39:12.400 "method": "bdev_iscsi_set_options", 00:39:12.400 "params": { 00:39:12.400 "timeout_sec": 30 00:39:12.400 } 00:39:12.400 }, 00:39:12.400 { 00:39:12.400 "method": "bdev_nvme_set_options", 00:39:12.400 "params": { 00:39:12.400 "action_on_timeout": "none", 00:39:12.400 "timeout_us": 0, 00:39:12.400 "timeout_admin_us": 0, 00:39:12.400 "keep_alive_timeout_ms": 10000, 00:39:12.400 "arbitration_burst": 0, 00:39:12.400 "low_priority_weight": 0, 00:39:12.400 "medium_priority_weight": 0, 00:39:12.400 "high_priority_weight": 0, 00:39:12.400 "nvme_adminq_poll_period_us": 10000, 00:39:12.400 "nvme_ioq_poll_period_us": 0, 00:39:12.400 "io_queue_requests": 512, 00:39:12.400 "delay_cmd_submit": true, 00:39:12.400 "transport_retry_count": 4, 00:39:12.400 "bdev_retry_count": 3, 00:39:12.400 "transport_ack_timeout": 0, 00:39:12.400 "ctrlr_loss_timeout_sec": 0, 00:39:12.400 "reconnect_delay_sec": 0, 00:39:12.400 "fast_io_fail_timeout_sec": 0, 00:39:12.400 "disable_auto_failback": false, 00:39:12.400 "generate_uuids": false, 00:39:12.400 "transport_tos": 0, 00:39:12.400 "nvme_error_stat": false, 00:39:12.400 "rdma_srq_size": 0, 00:39:12.400 "io_path_stat": false, 00:39:12.400 "allow_accel_sequence": false, 00:39:12.400 "rdma_max_cq_size": 0, 00:39:12.400 "rdma_cm_event_timeout_ms": 0, 00:39:12.400 "dhchap_digests": [ 00:39:12.400 "sha256", 00:39:12.400 "sha384", 00:39:12.400 "sha512" 00:39:12.400 ], 00:39:12.400 "dhchap_dhgroups": [ 00:39:12.400 "null", 00:39:12.400 "ffdhe2048", 00:39:12.400 "ffdhe3072", 00:39:12.400 "ffdhe4096", 00:39:12.400 "ffdhe6144", 00:39:12.400 "ffdhe8192" 00:39:12.400 ] 00:39:12.400 } 00:39:12.400 }, 00:39:12.400 { 00:39:12.400 "method": "bdev_nvme_attach_controller", 00:39:12.400 "params": { 00:39:12.400 "name": "nvme0", 00:39:12.400 "trtype": "TCP", 00:39:12.400 "adrfam": "IPv4", 00:39:12.400 "traddr": "127.0.0.1", 00:39:12.400 "trsvcid": "4420", 00:39:12.400 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:12.400 "prchk_reftag": false, 00:39:12.400 "prchk_guard": false, 00:39:12.400 "ctrlr_loss_timeout_sec": 0, 00:39:12.400 "reconnect_delay_sec": 0, 00:39:12.400 "fast_io_fail_timeout_sec": 0, 00:39:12.400 "psk": "key0", 00:39:12.400 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:12.400 "hdgst": false, 00:39:12.400 "ddgst": false, 00:39:12.400 "multipath": "multipath" 00:39:12.400 } 00:39:12.400 }, 00:39:12.400 { 00:39:12.400 "method": "bdev_nvme_set_hotplug", 00:39:12.400 "params": { 00:39:12.400 "period_us": 100000, 00:39:12.400 "enable": false 00:39:12.400 } 00:39:12.400 }, 00:39:12.400 { 00:39:12.400 "method": "bdev_wait_for_examine" 00:39:12.400 } 00:39:12.400 ] 00:39:12.400 }, 00:39:12.400 { 00:39:12.400 
"subsystem": "nbd", 00:39:12.400 "config": [] 00:39:12.400 } 00:39:12.400 ] 00:39:12.400 }' 00:39:12.400 17:56:11 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:12.400 17:56:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:12.400 [2024-10-14 17:56:11.374113] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 00:39:12.400 [2024-10-14 17:56:11.374160] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389142 ] 00:39:12.400 [2024-10-14 17:56:11.442053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.400 [2024-10-14 17:56:11.483899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:12.658 [2024-10-14 17:56:11.643193] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:13.225 17:56:12 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:13.225 17:56:12 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:39:13.225 17:56:12 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:39:13.225 17:56:12 keyring_file -- keyring/file.sh@121 -- # jq length 00:39:13.225 17:56:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:13.484 17:56:12 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:13.484 17:56:12 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:39:13.484 17:56:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:13.484 17:56:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:13.484 17:56:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:13.484 17:56:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:13.484 17:56:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:13.484 17:56:12 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:39:13.484 17:56:12 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:39:13.484 17:56:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:13.484 17:56:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:13.484 17:56:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:13.484 17:56:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:13.484 17:56:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:13.743 17:56:12 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:39:13.743 17:56:12 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:39:13.743 17:56:12 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:39:13.743 17:56:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:14.015 17:56:12 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:39:14.015 17:56:12 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:14.015 17:56:12 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.AbMXKSRhgZ 
/tmp/tmp.Lwh4AlKOou 00:39:14.015 17:56:12 keyring_file -- keyring/file.sh@20 -- # killprocess 1389142 00:39:14.015 17:56:12 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1389142 ']' 00:39:14.015 17:56:12 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1389142 00:39:14.015 17:56:13 keyring_file -- common/autotest_common.sh@955 -- # uname 00:39:14.015 17:56:13 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:14.015 17:56:13 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1389142 00:39:14.015 17:56:13 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:14.015 17:56:13 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:14.015 17:56:13 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1389142' 00:39:14.015 killing process with pid 1389142 00:39:14.015 17:56:13 keyring_file -- common/autotest_common.sh@969 -- # kill 1389142 00:39:14.015 Received shutdown signal, test time was about 1.000000 seconds 00:39:14.015 00:39:14.015 Latency(us) 00:39:14.015 [2024-10-14T15:56:13.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:14.015 [2024-10-14T15:56:13.153Z] =================================================================================================================== 00:39:14.015 [2024-10-14T15:56:13.153Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:14.015 17:56:13 keyring_file -- common/autotest_common.sh@974 -- # wait 1389142 00:39:14.273 17:56:13 keyring_file -- keyring/file.sh@21 -- # killprocess 1387619 00:39:14.273 17:56:13 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1387619 ']' 00:39:14.273 17:56:13 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1387619 00:39:14.273 17:56:13 keyring_file -- common/autotest_common.sh@955 -- # uname 00:39:14.273 17:56:13 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:14.273 17:56:13 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1387619 00:39:14.273 17:56:13 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:14.273 17:56:13 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:14.273 17:56:13 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1387619' 00:39:14.273 killing process with pid 1387619 00:39:14.273 17:56:13 keyring_file -- common/autotest_common.sh@969 -- # kill 1387619 00:39:14.273 17:56:13 keyring_file -- common/autotest_common.sh@974 -- # wait 1387619 00:39:14.531 00:39:14.531 real 0m11.731s 00:39:14.531 user 0m29.173s 00:39:14.531 sys 0m2.704s 00:39:14.531 17:56:13 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:14.531 17:56:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:14.531 ************************************ 00:39:14.531 END TEST keyring_file 00:39:14.531 ************************************ 00:39:14.531 17:56:13 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:39:14.531 17:56:13 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:14.531 17:56:13 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:14.531 17:56:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:14.531 17:56:13 -- common/autotest_common.sh@10 -- # set +x 
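keyring_file ends by reaping both bdevperf instances and the spdk_tgt through the killprocess helper, which (as the uname/ps lines above show) first resolves the pid's command name with ps --no-headers -o comm= to decide whether it is signalling the process directly or through a sudo parent, then kills and waits on it. A rough stand-in for the direct-kill path, assuming the target is a child of the current shell:

killprocess() {
    local pid=$1
    ps -p "$pid" > /dev/null 2>&1 || return 0   # already gone
    kill "$pid"
    wait "$pid" 2> /dev/null || true            # reap; ignore the signal-induced exit code
}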
00:39:14.531 ************************************ 00:39:14.531 START TEST keyring_linux 00:39:14.531 ************************************ 00:39:14.531 17:56:13 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:14.531 Joined session keyring: 208063991 00:39:14.790 * Looking for test storage... 00:39:14.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:14.790 17:56:13 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:14.790 17:56:13 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:39:14.790 17:56:13 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:14.790 17:56:13 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@345 -- # : 1 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:14.790 17:56:13 keyring_linux -- scripts/common.sh@368 -- # return 0 00:39:14.790 17:56:13 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:14.790 17:56:13 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:14.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.790 --rc genhtml_branch_coverage=1 00:39:14.790 --rc genhtml_function_coverage=1 00:39:14.790 --rc genhtml_legend=1 00:39:14.790 --rc geninfo_all_blocks=1 00:39:14.790 --rc geninfo_unexecuted_blocks=1 00:39:14.790 00:39:14.790 ' 00:39:14.790 17:56:13 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:14.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.790 --rc genhtml_branch_coverage=1 00:39:14.790 --rc genhtml_function_coverage=1 00:39:14.790 --rc genhtml_legend=1 00:39:14.790 --rc geninfo_all_blocks=1 00:39:14.790 --rc geninfo_unexecuted_blocks=1 00:39:14.790 00:39:14.790 ' 00:39:14.790 17:56:13 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:14.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.791 --rc genhtml_branch_coverage=1 00:39:14.791 --rc genhtml_function_coverage=1 00:39:14.791 --rc genhtml_legend=1 00:39:14.791 --rc geninfo_all_blocks=1 00:39:14.791 --rc geninfo_unexecuted_blocks=1 00:39:14.791 00:39:14.791 ' 00:39:14.791 17:56:13 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:14.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.791 --rc genhtml_branch_coverage=1 00:39:14.791 --rc genhtml_function_coverage=1 00:39:14.791 --rc genhtml_legend=1 00:39:14.791 --rc geninfo_all_blocks=1 00:39:14.791 --rc geninfo_unexecuted_blocks=1 00:39:14.791 00:39:14.791 ' 00:39:14.791 17:56:13 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:14.791 17:56:13 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:14.791 17:56:13 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:39:14.791 17:56:13 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:14.791 17:56:13 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:14.791 17:56:13 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:14.791 17:56:13 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.791 17:56:13 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.791 17:56:13 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.791 17:56:13 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:14.791 17:56:13 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
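keyring_linux runs under scripts/keyctl-session-wrapper, which is what printed "Joined session keyring: 208063991" above: the whole test executes inside a fresh session keyring, so any :spdk-test: keys it installs disappear with the session. In keyctl terms the wrapper boils down to something like the following (the key name and payload are illustrative placeholders, not the test's literal values):

# Run a command inside a new anonymous session keyring.
keyctl session - ./test/keyring/linux.sh

# Within that session, a PSK can be parked as a 'user' key and inspected.
keyctl add user ":spdk-test:key0" "NVMeTLSkey-1:00:placeholder:" @s
keyctl list @s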
00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:14.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:14.791 17:56:13 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:14.791 17:56:13 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:14.791 17:56:13 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:14.791 17:56:13 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:14.791 17:56:13 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:14.791 17:56:13 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:14.791 17:56:13 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:14.791 17:56:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:14.791 17:56:13 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:14.791 17:56:13 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:14.791 17:56:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:14.791 17:56:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:14.791 17:56:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@731 -- # python - 00:39:14.791 17:56:13 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:14.791 17:56:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:14.791 /tmp/:spdk-test:key0 00:39:14.791 17:56:13 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:14.791 17:56:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:14.791 17:56:13 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:14.791 17:56:13 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:14.791 17:56:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:14.791 17:56:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:14.791 
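Annotation: prep_key (keyring/common.sh) takes a name, a key and a digest, wraps the key in the NVMe TLS PSK interchange format via format_interchange_psk/format_key, writes the result to the given path and chmods it 0600. The interchange string has the shape NVMeTLSkey-1:<hash>:<base64 payload>:, where hash 00 means no HMAC and the payload is the configured key bytes followed by their CRC32; in this test the 32-character hex string itself serves as the key material. A hedged reconstruction of that wrapping, in the same inline-python style the logged nvmf/common.sh helper uses (the little-endian CRC byte order is inferred from the serialized key that appears further down in this log):

key=00112233445566778899aabbccddeeff
python3 - "$key" <<'PY'
import base64, struct, sys, zlib
raw = sys.argv[1].encode()                  # ASCII hex string used as the key bytes
crc = struct.pack("<I", zlib.crc32(raw))    # 4-byte CRC32, assumed little-endian
print("NVMeTLSkey-1:00:" + base64.b64encode(raw + crc).decode() + ":")
PY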
17:56:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:39:14.791 17:56:13 keyring_linux -- nvmf/common.sh@731 -- # python - 00:39:14.791 17:56:13 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:14.791 17:56:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:14.791 /tmp/:spdk-test:key1 00:39:14.791 17:56:13 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1389693 00:39:14.791 17:56:13 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1389693 00:39:14.791 17:56:13 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:14.791 17:56:13 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1389693 ']' 00:39:14.791 17:56:13 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:14.791 17:56:13 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:14.791 17:56:13 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:14.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:14.791 17:56:13 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:14.791 17:56:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:15.050 [2024-10-14 17:56:13.963298] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:39:15.050 [2024-10-14 17:56:13.963347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389693 ] 00:39:15.050 [2024-10-14 17:56:14.029394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:15.050 [2024-10-14 17:56:14.071101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:15.308 17:56:14 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:15.308 17:56:14 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:39:15.308 17:56:14 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:15.308 17:56:14 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.308 17:56:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:15.308 [2024-10-14 17:56:14.281543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:15.308 null0 00:39:15.308 [2024-10-14 17:56:14.313598] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:15.308 [2024-10-14 17:56:14.313971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:15.308 17:56:14 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.308 17:56:14 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:15.308 642313379 00:39:15.308 17:56:14 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:15.308 990024256 00:39:15.308 17:56:14 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1389706 00:39:15.308 17:56:14 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1389706 /var/tmp/bperf.sock 00:39:15.308 17:56:14 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:15.308 17:56:14 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1389706 ']' 00:39:15.308 17:56:14 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:15.308 17:56:14 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:15.308 17:56:14 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:15.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:15.308 17:56:14 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:15.308 17:56:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:15.308 [2024-10-14 17:56:14.382131] Starting SPDK v25.01-pre git sha1 2a72c3069 / DPDK 24.03.0 initialization... 
00:39:15.308 [2024-10-14 17:56:14.382170] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389706 ] 00:39:15.566 [2024-10-14 17:56:14.449330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:15.566 [2024-10-14 17:56:14.489251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:15.566 17:56:14 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:15.566 17:56:14 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:39:15.566 17:56:14 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:15.566 17:56:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:15.825 17:56:14 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:15.825 17:56:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:16.083 17:56:14 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:16.083 17:56:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:16.083 [2024-10-14 17:56:15.149712] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:16.083 nvme0n1 00:39:16.341 17:56:15 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:39:16.341 17:56:15 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:16.341 17:56:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:16.341 17:56:15 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:16.341 17:56:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:16.341 17:56:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:16.341 17:56:15 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:16.341 17:56:15 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:16.341 17:56:15 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:16.341 17:56:15 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:16.341 17:56:15 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:16.341 17:56:15 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:16.341 17:56:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:16.600 17:56:15 keyring_linux -- keyring/linux.sh@25 -- # sn=642313379 00:39:16.600 17:56:15 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:16.600 17:56:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:16.600 17:56:15 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 642313379 == \6\4\2\3\1\3\3\7\9 ]] 00:39:16.600 17:56:15 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 642313379 00:39:16.600 17:56:15 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:16.600 17:56:15 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:16.600 Running I/O for 1 seconds... 00:39:17.977 21890.00 IOPS, 85.51 MiB/s 00:39:17.977 Latency(us) 00:39:17.977 [2024-10-14T15:56:17.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:17.977 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:17.977 nvme0n1 : 1.01 21891.03 85.51 0.00 0.00 5827.90 3729.31 8738.13 00:39:17.977 [2024-10-14T15:56:17.115Z] =================================================================================================================== 00:39:17.977 [2024-10-14T15:56:17.115Z] Total : 21891.03 85.51 0.00 0.00 5827.90 3729.31 8738.13 00:39:17.977 { 00:39:17.977 "results": [ 00:39:17.977 { 00:39:17.977 "job": "nvme0n1", 00:39:17.977 "core_mask": "0x2", 00:39:17.977 "workload": "randread", 00:39:17.977 "status": "finished", 00:39:17.977 "queue_depth": 128, 00:39:17.977 "io_size": 4096, 00:39:17.977 "runtime": 1.0058, 00:39:17.977 "iops": 21891.032014316963, 00:39:17.977 "mibps": 85.51184380592564, 00:39:17.977 "io_failed": 0, 00:39:17.977 "io_timeout": 0, 00:39:17.977 "avg_latency_us": 5827.897065690841, 00:39:17.977 "min_latency_us": 3729.310476190476, 00:39:17.977 "max_latency_us": 8738.133333333333 00:39:17.977 } 00:39:17.977 ], 00:39:17.977 "core_count": 1 00:39:17.977 } 00:39:17.977 17:56:16 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:17.977 17:56:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:17.977 17:56:16 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:17.977 17:56:16 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:17.977 17:56:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:17.977 17:56:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:17.977 17:56:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:17.977 17:56:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:18.236 17:56:17 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:18.237 17:56:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:18.237 17:56:17 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:18.237 17:56:17 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:18.237 17:56:17 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:39:18.237 17:56:17 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:39:18.237 17:56:17 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:18.237 17:56:17 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:18.237 17:56:17 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:18.237 17:56:17 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:18.237 17:56:17 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:18.237 17:56:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:18.237 [2024-10-14 17:56:17.322164] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:18.237 [2024-10-14 17:56:17.322792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcaeb50 (107): Transport endpoint is not connected 00:39:18.237 [2024-10-14 17:56:17.323786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcaeb50 (9): Bad file descriptor 00:39:18.237 [2024-10-14 17:56:17.324788] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:18.237 [2024-10-14 17:56:17.324797] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:18.237 [2024-10-14 17:56:17.324804] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:18.237 [2024-10-14 17:56:17.324812] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:39:18.237 request: 00:39:18.237 { 00:39:18.237 "name": "nvme0", 00:39:18.237 "trtype": "tcp", 00:39:18.237 "traddr": "127.0.0.1", 00:39:18.237 "adrfam": "ipv4", 00:39:18.237 "trsvcid": "4420", 00:39:18.237 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:18.237 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:18.237 "prchk_reftag": false, 00:39:18.237 "prchk_guard": false, 00:39:18.237 "hdgst": false, 00:39:18.237 "ddgst": false, 00:39:18.237 "psk": ":spdk-test:key1", 00:39:18.237 "allow_unrecognized_csi": false, 00:39:18.237 "method": "bdev_nvme_attach_controller", 00:39:18.237 "req_id": 1 00:39:18.237 } 00:39:18.237 Got JSON-RPC error response 00:39:18.237 response: 00:39:18.237 { 00:39:18.237 "code": -5, 00:39:18.237 "message": "Input/output error" 00:39:18.237 } 00:39:18.237 17:56:17 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:39:18.237 17:56:17 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:18.237 17:56:17 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:18.237 17:56:17 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:18.237 17:56:17 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:18.237 17:56:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:18.237 17:56:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:18.237 17:56:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:18.237 17:56:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:18.237 17:56:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:18.237 17:56:17 keyring_linux -- keyring/linux.sh@33 -- # sn=642313379 00:39:18.237 17:56:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 642313379 00:39:18.237 1 links removed 00:39:18.237 17:56:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:18.237 17:56:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:18.237 17:56:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:18.237 17:56:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:18.237 17:56:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:18.237 17:56:17 keyring_linux -- keyring/linux.sh@33 -- # sn=990024256 00:39:18.237 17:56:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 990024256 00:39:18.237 1 links removed 00:39:18.237 17:56:17 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1389706 00:39:18.237 17:56:17 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1389706 ']' 00:39:18.237 17:56:17 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1389706 00:39:18.237 17:56:17 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:39:18.237 17:56:17 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:18.237 17:56:17 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1389706 00:39:18.496 17:56:17 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:18.496 17:56:17 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:18.496 17:56:17 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1389706' 00:39:18.496 killing process with pid 1389706 00:39:18.496 17:56:17 keyring_linux -- common/autotest_common.sh@969 -- # kill 1389706 00:39:18.496 Received shutdown signal, test time was about 1.000000 seconds 00:39:18.496 00:39:18.496 
Latency(us) 00:39:18.496 [2024-10-14T15:56:17.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:18.496 [2024-10-14T15:56:17.634Z] =================================================================================================================== 00:39:18.496 [2024-10-14T15:56:17.634Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:18.496 17:56:17 keyring_linux -- common/autotest_common.sh@974 -- # wait 1389706 00:39:18.496 17:56:17 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1389693 00:39:18.496 17:56:17 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1389693 ']' 00:39:18.496 17:56:17 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1389693 00:39:18.496 17:56:17 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:39:18.496 17:56:17 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:18.496 17:56:17 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1389693 00:39:18.496 17:56:17 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:18.496 17:56:17 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:18.496 17:56:17 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1389693' 00:39:18.496 killing process with pid 1389693 00:39:18.496 17:56:17 keyring_linux -- common/autotest_common.sh@969 -- # kill 1389693 00:39:18.496 17:56:17 keyring_linux -- common/autotest_common.sh@974 -- # wait 1389693 00:39:19.063 00:39:19.063 real 0m4.292s 00:39:19.063 user 0m8.073s 00:39:19.063 sys 0m1.455s 00:39:19.063 17:56:17 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:19.063 17:56:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:19.063 ************************************ 00:39:19.063 END TEST keyring_linux 00:39:19.063 ************************************ 00:39:19.063 17:56:17 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:39:19.063 17:56:17 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:39:19.063 17:56:17 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:39:19.063 17:56:17 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:39:19.063 17:56:17 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:39:19.063 17:56:17 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:39:19.063 17:56:17 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:39:19.063 17:56:17 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:39:19.063 17:56:17 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:39:19.063 17:56:17 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:39:19.063 17:56:17 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:39:19.063 17:56:17 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:39:19.063 17:56:17 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:39:19.063 17:56:17 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:39:19.063 17:56:17 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:39:19.063 17:56:17 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:39:19.063 17:56:17 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:39:19.063 17:56:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:19.063 17:56:17 -- common/autotest_common.sh@10 -- # set +x 00:39:19.063 17:56:17 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:39:19.063 17:56:17 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:39:19.063 17:56:17 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:39:19.063 17:56:17 -- common/autotest_common.sh@10 -- # set +x 00:39:24.331 INFO: APP EXITING 
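Annotation: the long run of "'[' 0 -eq 1 ']'" traces above is spdk/autotest.sh checking and skipping optional test groups whose SPDK_TEST_* flags were left at 0 for this job; only the keyring suites reached this far. The gating pattern is roughly the following (flag, helper and script names illustrative of the convention, not copied from autotest.sh):

if [ "${SPDK_TEST_FUZZER:-0}" -eq 1 ]; then
    run_test "fuzz" "$rootdir/test/fuzz/autofuzz.sh"   # run_test records timing and pass/fail
fi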
00:39:24.331 INFO: killing all VMs 00:39:24.331 INFO: killing vhost app 00:39:24.331 INFO: EXIT DONE 00:39:26.870 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:39:26.870 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:39:26.870 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:39:26.870 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:39:26.870 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:39:26.870 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:39:26.870 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:39:26.870 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:39:26.870 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:39:26.870 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:39:26.870 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:39:26.870 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:39:26.870 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:39:26.870 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:39:26.870 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:39:26.870 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:39:26.870 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:39:30.271 Cleaning 00:39:30.271 Removing: /var/run/dpdk/spdk0/config 00:39:30.271 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:30.271 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:30.271 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:30.271 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:30.271 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:30.271 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:30.271 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:30.271 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:30.271 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:30.271 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:30.271 Removing: /var/run/dpdk/spdk1/config 00:39:30.271 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:30.271 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:30.271 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:30.271 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:30.271 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:30.271 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:30.271 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:30.271 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:30.271 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:30.271 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:30.271 Removing: /var/run/dpdk/spdk2/config 00:39:30.271 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:30.271 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:30.271 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:30.271 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:30.271 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:30.271 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:30.271 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:30.271 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:30.272 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:30.272 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:30.272 Removing: /var/run/dpdk/spdk3/config 00:39:30.272 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:30.272 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:30.272 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:30.272 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:30.272 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:30.272 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:30.272 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:30.272 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:30.272 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:30.272 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:30.272 Removing: /var/run/dpdk/spdk4/config 00:39:30.272 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:30.272 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:30.272 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:30.272 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:30.272 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:30.272 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:30.272 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:30.272 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:30.272 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:30.272 Removing: /var/run/dpdk/spdk4/hugepage_info 00:39:30.272 Removing: /dev/shm/bdev_svc_trace.1 00:39:30.272 Removing: /dev/shm/nvmf_trace.0 00:39:30.272 Removing: /dev/shm/spdk_tgt_trace.pid917124 00:39:30.272 Removing: /var/run/dpdk/spdk0 00:39:30.272 Removing: /var/run/dpdk/spdk1 00:39:30.272 Removing: /var/run/dpdk/spdk2 00:39:30.272 Removing: /var/run/dpdk/spdk3 00:39:30.272 Removing: /var/run/dpdk/spdk4 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1007812 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1011876 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1058289 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1063473 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1069346 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1075270 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1075272 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1076183 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1077000 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1077799 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1078482 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1078486 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1078721 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1078755 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1078891 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1079663 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1080571 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1081488 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1082116 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1082177 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1082405 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1083428 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1084410 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1093093 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1121397 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1126029 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1128022 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1129870 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1129890 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1130125 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1130232 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1130667 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1132480 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1133241 00:39:30.272 Removing: 
/var/run/dpdk/spdk_pid1133743 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1135854 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1136333 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1137052 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1141125 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1146514 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1146516 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1146517 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1150418 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1158841 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1162658 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1168647 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1169876 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1171731 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1173099 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1177789 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1181717 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1189262 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1189271 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1193801 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1194057 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1194225 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1194680 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1194685 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1199185 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1199743 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1204111 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1206832 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1212149 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1217582 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1226872 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1233862 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1233870 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1252648 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1253130 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1253717 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1254287 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1254980 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1255500 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1255975 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1256666 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1260697 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1260950 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1267129 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1267412 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1273041 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1277283 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1287165 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1287716 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1291971 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1292217 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1296350 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1302102 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1304540 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1315004 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1323798 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1325407 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1326327 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1342457 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1346264 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1348953 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1356917 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1356922 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1362176 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1364451 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1366416 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1367616 00:39:30.272 Removing: 
/var/run/dpdk/spdk_pid1369533 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1370709 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1379447 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1379908 00:39:30.272 Removing: /var/run/dpdk/spdk_pid1380372 00:39:30.531 Removing: /var/run/dpdk/spdk_pid1382858 00:39:30.531 Removing: /var/run/dpdk/spdk_pid1383322 00:39:30.531 Removing: /var/run/dpdk/spdk_pid1383790 00:39:30.532 Removing: /var/run/dpdk/spdk_pid1387619 00:39:30.532 Removing: /var/run/dpdk/spdk_pid1387624 00:39:30.532 Removing: /var/run/dpdk/spdk_pid1389142 00:39:30.532 Removing: /var/run/dpdk/spdk_pid1389693 00:39:30.532 Removing: /var/run/dpdk/spdk_pid1389706 00:39:30.532 Removing: /var/run/dpdk/spdk_pid914759 00:39:30.532 Removing: /var/run/dpdk/spdk_pid915827 00:39:30.532 Removing: /var/run/dpdk/spdk_pid917124 00:39:30.532 Removing: /var/run/dpdk/spdk_pid917609 00:39:30.532 Removing: /var/run/dpdk/spdk_pid918519 00:39:30.532 Removing: /var/run/dpdk/spdk_pid918734 00:39:30.532 Removing: /var/run/dpdk/spdk_pid919705 00:39:30.532 Removing: /var/run/dpdk/spdk_pid919743 00:39:30.532 Removing: /var/run/dpdk/spdk_pid920074 00:39:30.532 Removing: /var/run/dpdk/spdk_pid921812 00:39:30.532 Removing: /var/run/dpdk/spdk_pid923259 00:39:30.532 Removing: /var/run/dpdk/spdk_pid923601 00:39:30.532 Removing: /var/run/dpdk/spdk_pid923869 00:39:30.532 Removing: /var/run/dpdk/spdk_pid924061 00:39:30.532 Removing: /var/run/dpdk/spdk_pid924271 00:39:30.532 Removing: /var/run/dpdk/spdk_pid924521 00:39:30.532 Removing: /var/run/dpdk/spdk_pid924777 00:39:30.532 Removing: /var/run/dpdk/spdk_pid925058 00:39:30.532 Removing: /var/run/dpdk/spdk_pid925809 00:39:30.532 Removing: /var/run/dpdk/spdk_pid928801 00:39:30.532 Removing: /var/run/dpdk/spdk_pid929057 00:39:30.532 Removing: /var/run/dpdk/spdk_pid929313 00:39:30.532 Removing: /var/run/dpdk/spdk_pid929322 00:39:30.532 Removing: /var/run/dpdk/spdk_pid929817 00:39:30.532 Removing: /var/run/dpdk/spdk_pid929827 00:39:30.532 Removing: /var/run/dpdk/spdk_pid930315 00:39:30.532 Removing: /var/run/dpdk/spdk_pid930320 00:39:30.532 Removing: /var/run/dpdk/spdk_pid930586 00:39:30.532 Removing: /var/run/dpdk/spdk_pid930695 00:39:30.532 Removing: /var/run/dpdk/spdk_pid930854 00:39:30.532 Removing: /var/run/dpdk/spdk_pid931047 00:39:30.532 Removing: /var/run/dpdk/spdk_pid931427 00:39:30.532 Removing: /var/run/dpdk/spdk_pid931674 00:39:30.532 Removing: /var/run/dpdk/spdk_pid931974 00:39:30.532 Removing: /var/run/dpdk/spdk_pid935756 00:39:30.532 Removing: /var/run/dpdk/spdk_pid940159 00:39:30.532 Removing: /var/run/dpdk/spdk_pid950631 00:39:30.532 Removing: /var/run/dpdk/spdk_pid951235 00:39:30.532 Removing: /var/run/dpdk/spdk_pid955510 00:39:30.532 Removing: /var/run/dpdk/spdk_pid955755 00:39:30.532 Removing: /var/run/dpdk/spdk_pid960027 00:39:30.532 Removing: /var/run/dpdk/spdk_pid965913 00:39:30.532 Removing: /var/run/dpdk/spdk_pid968708 00:39:30.532 Removing: /var/run/dpdk/spdk_pid978834 00:39:30.532 Removing: /var/run/dpdk/spdk_pid987882 00:39:30.532 Removing: /var/run/dpdk/spdk_pid989516 00:39:30.532 Removing: /var/run/dpdk/spdk_pid990429 00:39:30.532 Clean 00:39:30.791 17:56:29 -- common/autotest_common.sh@1451 -- # return 0 00:39:30.791 17:56:29 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:39:30.791 17:56:29 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:30.791 17:56:29 -- common/autotest_common.sh@10 -- # set +x 00:39:30.791 17:56:29 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:39:30.791 17:56:29 -- common/autotest_common.sh@730 -- # 
xtrace_disable 00:39:30.791 17:56:29 -- common/autotest_common.sh@10 -- # set +x 00:39:30.791 17:56:29 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:30.791 17:56:29 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:39:30.791 17:56:29 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:39:30.791 17:56:29 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:39:30.791 17:56:29 -- spdk/autotest.sh@394 -- # hostname 00:39:30.791 17:56:29 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:39:31.051 geninfo: WARNING: invalid characters removed from testname! 00:39:52.987 17:56:50 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:54.364 17:56:53 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:56.269 17:56:54 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:57.647 17:56:56 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:59.551 17:56:58 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:01.456 17:57:00 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:03.360 17:57:02 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:40:03.360 17:57:02 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:40:03.360 17:57:02 -- common/autotest_common.sh@1691 -- $ lcov --version 00:40:03.360 17:57:02 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:40:03.619 17:57:02 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:40:03.619 17:57:02 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:40:03.619 17:57:02 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:40:03.619 17:57:02 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:40:03.619 17:57:02 -- scripts/common.sh@336 -- $ IFS=.-: 00:40:03.619 17:57:02 -- scripts/common.sh@336 -- $ read -ra ver1 00:40:03.619 17:57:02 -- scripts/common.sh@337 -- $ IFS=.-: 00:40:03.619 17:57:02 -- scripts/common.sh@337 -- $ read -ra ver2 00:40:03.619 17:57:02 -- scripts/common.sh@338 -- $ local 'op=<' 00:40:03.619 17:57:02 -- scripts/common.sh@340 -- $ ver1_l=2 00:40:03.619 17:57:02 -- scripts/common.sh@341 -- $ ver2_l=1 00:40:03.619 17:57:02 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:40:03.619 17:57:02 -- scripts/common.sh@344 -- $ case "$op" in 00:40:03.619 17:57:02 -- scripts/common.sh@345 -- $ : 1 00:40:03.619 17:57:02 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:40:03.619 17:57:02 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:03.619 17:57:02 -- scripts/common.sh@365 -- $ decimal 1 00:40:03.619 17:57:02 -- scripts/common.sh@353 -- $ local d=1 00:40:03.619 17:57:02 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:40:03.619 17:57:02 -- scripts/common.sh@355 -- $ echo 1 00:40:03.619 17:57:02 -- scripts/common.sh@365 -- $ ver1[v]=1 00:40:03.619 17:57:02 -- scripts/common.sh@366 -- $ decimal 2 00:40:03.619 17:57:02 -- scripts/common.sh@353 -- $ local d=2 00:40:03.619 17:57:02 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:40:03.619 17:57:02 -- scripts/common.sh@355 -- $ echo 2 00:40:03.620 17:57:02 -- scripts/common.sh@366 -- $ ver2[v]=2 00:40:03.620 17:57:02 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:40:03.620 17:57:02 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:40:03.620 17:57:02 -- scripts/common.sh@368 -- $ return 0 00:40:03.620 17:57:02 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:03.620 17:57:02 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:40:03.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:03.620 --rc genhtml_branch_coverage=1 00:40:03.620 --rc genhtml_function_coverage=1 00:40:03.620 --rc genhtml_legend=1 00:40:03.620 --rc geninfo_all_blocks=1 00:40:03.620 --rc geninfo_unexecuted_blocks=1 00:40:03.620 00:40:03.620 ' 00:40:03.620 17:57:02 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:40:03.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:03.620 --rc genhtml_branch_coverage=1 00:40:03.620 --rc genhtml_function_coverage=1 00:40:03.620 --rc genhtml_legend=1 00:40:03.620 --rc geninfo_all_blocks=1 00:40:03.620 --rc geninfo_unexecuted_blocks=1 00:40:03.620 00:40:03.620 ' 00:40:03.620 17:57:02 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:40:03.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:40:03.620 --rc genhtml_branch_coverage=1 00:40:03.620 --rc genhtml_function_coverage=1 00:40:03.620 --rc genhtml_legend=1 00:40:03.620 --rc geninfo_all_blocks=1 00:40:03.620 --rc geninfo_unexecuted_blocks=1 00:40:03.620 00:40:03.620 ' 00:40:03.620 17:57:02 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:40:03.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:03.620 --rc genhtml_branch_coverage=1 00:40:03.620 --rc genhtml_function_coverage=1 00:40:03.620 --rc genhtml_legend=1 00:40:03.620 --rc geninfo_all_blocks=1 00:40:03.620 --rc geninfo_unexecuted_blocks=1 00:40:03.620 00:40:03.620 ' 00:40:03.620 17:57:02 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:03.620 17:57:02 -- scripts/common.sh@15 -- $ shopt -s extglob 00:40:03.620 17:57:02 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:40:03.620 17:57:02 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:03.620 17:57:02 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:03.620 17:57:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:03.620 17:57:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:03.620 17:57:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:03.620 17:57:02 -- paths/export.sh@5 -- $ export PATH 00:40:03.620 17:57:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:03.620 17:57:02 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:40:03.620 17:57:02 -- common/autobuild_common.sh@486 -- $ date +%s 00:40:03.620 17:57:02 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728921422.XXXXXX 00:40:03.620 17:57:02 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728921422.aydWLs 00:40:03.620 17:57:02 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:40:03.620 17:57:02 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:40:03.620 17:57:02 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:40:03.620 17:57:02 
-- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:40:03.620 17:57:02 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:40:03.620 17:57:02 -- common/autobuild_common.sh@502 -- $ get_config_params 00:40:03.620 17:57:02 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:40:03.620 17:57:02 -- common/autotest_common.sh@10 -- $ set +x 00:40:03.620 17:57:02 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:40:03.620 17:57:02 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:40:03.620 17:57:02 -- pm/common@17 -- $ local monitor 00:40:03.620 17:57:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:03.620 17:57:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:03.620 17:57:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:03.620 17:57:02 -- pm/common@21 -- $ date +%s 00:40:03.620 17:57:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:03.620 17:57:02 -- pm/common@21 -- $ date +%s 00:40:03.620 17:57:02 -- pm/common@25 -- $ sleep 1 00:40:03.620 17:57:02 -- pm/common@21 -- $ date +%s 00:40:03.620 17:57:02 -- pm/common@21 -- $ date +%s 00:40:03.620 17:57:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728921422 00:40:03.620 17:57:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728921422 00:40:03.620 17:57:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728921422 00:40:03.620 17:57:02 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728921422 00:40:03.620 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728921422_collect-cpu-load.pm.log 00:40:03.620 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728921422_collect-vmstat.pm.log 00:40:03.620 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728921422_collect-cpu-temp.pm.log 00:40:03.620 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728921422_collect-bmc-pm.bmc.pm.log 00:40:04.559 17:57:03 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:40:04.559 17:57:03 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:40:04.559 17:57:03 -- spdk/autopackage.sh@14 -- $ timing_finish 00:40:04.559 17:57:03 -- common/autotest_common.sh@736 -- $ 
flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:04.559 17:57:03 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:40:04.559 17:57:03 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:04.559 17:57:03 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:40:04.559 17:57:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:40:04.559 17:57:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:40:04.559 17:57:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:04.559 17:57:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:40:04.559 17:57:03 -- pm/common@44 -- $ pid=1400478 00:40:04.559 17:57:03 -- pm/common@50 -- $ kill -TERM 1400478 00:40:04.559 17:57:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:04.559 17:57:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:40:04.559 17:57:03 -- pm/common@44 -- $ pid=1400480 00:40:04.559 17:57:03 -- pm/common@50 -- $ kill -TERM 1400480 00:40:04.559 17:57:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:04.559 17:57:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:40:04.559 17:57:03 -- pm/common@44 -- $ pid=1400482 00:40:04.559 17:57:03 -- pm/common@50 -- $ kill -TERM 1400482 00:40:04.559 17:57:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:04.559 17:57:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:40:04.559 17:57:03 -- pm/common@44 -- $ pid=1400507 00:40:04.559 17:57:03 -- pm/common@50 -- $ sudo -E kill -TERM 1400507 00:40:04.559 + [[ -n 837751 ]] 00:40:04.559 + sudo kill 837751 00:40:04.828 [Pipeline] } 00:40:04.843 [Pipeline] // stage 00:40:04.848 [Pipeline] } 00:40:04.862 [Pipeline] // timeout 00:40:04.866 [Pipeline] } 00:40:04.880 [Pipeline] // catchError 00:40:04.885 [Pipeline] } 00:40:04.898 [Pipeline] // wrap 00:40:04.904 [Pipeline] } 00:40:04.917 [Pipeline] // catchError 00:40:04.926 [Pipeline] stage 00:40:04.928 [Pipeline] { (Epilogue) 00:40:04.940 [Pipeline] catchError 00:40:04.942 [Pipeline] { 00:40:04.986 [Pipeline] echo 00:40:04.987 Cleanup processes 00:40:04.993 [Pipeline] sh 00:40:05.278 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:05.278 1400676 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:40:05.278 1400978 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:05.292 [Pipeline] sh 00:40:05.576 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:05.576 ++ grep -v 'sudo pgrep' 00:40:05.576 ++ awk '{print $1}' 00:40:05.576 + sudo kill -9 1400676 00:40:05.588 [Pipeline] sh 00:40:05.871 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:40:18.092 [Pipeline] sh 00:40:18.377 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:40:18.377 Artifacts sizes are good 00:40:18.392 [Pipeline] archiveArtifacts 00:40:18.400 Archiving artifacts 00:40:18.518 [Pipeline] sh 00:40:18.804 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:40:18.818 [Pipeline] cleanWs 00:40:18.828 [WS-CLEANUP] Deleting 
project workspace... 00:40:18.828 [WS-CLEANUP] Deferred wipeout is used... 00:40:18.834 [WS-CLEANUP] done 00:40:18.837 [Pipeline] } 00:40:18.854 [Pipeline] // catchError 00:40:18.866 [Pipeline] sh 00:40:19.218 + logger -p user.info -t JENKINS-CI 00:40:19.226 [Pipeline] } 00:40:19.240 [Pipeline] // stage 00:40:19.245 [Pipeline] } 00:40:19.259 [Pipeline] // node 00:40:19.264 [Pipeline] End of Pipeline 00:40:19.299 Finished: SUCCESS